
Site Hound

Site Hound (previously THH) is a Domain Discovery Tool that extends the capabilities of commercial search engines using automation and human-in-the-loop (HITL) machine learning, allowing the user to efficiently expand the set of relevant web pages within their domain(s) or topic(s) of interest.
Site Hound is the UI to a more complex set of tools described below. It was developed under the Memex Program by HyperionGray LLC in partnership with Scrapinghub, Ltd. (2015-2017).

Main Features

  1. Role-Based Access Control (RBAC).
  2. Multiple workspaces for keeping things tidy.
  3. Input of keywords to be included in or excluded from the search.
  4. Input of seed URLs, an initial list of websites that you already know are on-topic.
  5. Expands the list of sites by querying the keywords on multiple commercial search engines.
  6. Displays screenshots (powered by Splash), title, text, HTML, and relevant terms in the text.
  7. Allows the user to iteratively train a topic model based on these results by labeling them with defined values (Relevant/Irrelevant/Neutral), as well as re-scoring the associated keywords.
  8. Allows an unbounded training module based on user-defined categories.
  9. Language detection (powered by Apache Tika) and page-type classification.
  10. Allows the user to view the trained topic model through a human-interpretable explanation of the model powered by our machine learning explanation toolkit ELI5 (see the sketch after this list).
  11. Performs a broad crawl of thousands of sites, using machine learning provided by the DeepDeep-crawler to filter the ones matching the defined domain.
  12. Displays the results in a Pinterest-like interface for easy scrolling of the findings.
  13. Provides summarized data about the broad crawl and export of the broad-crawl results in CSV format.
  14. Provides real-time information about the progress of the crawlers.
  15. Allows search of the dark web via integration with an onion index.
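As a rough illustration of the explanation feature, the sketch below trains a simple text classifier on labeled pages and renders its weights with ELI5. The tiny dataset, vectorizer, and classifier are illustrative assumptions, not Site Hound's actual training code, which lives in the Sitehound-Backend and HH-DeepDeep components.

```python
# Minimal sketch: explain a page-relevance model with ELI5.
# The labeled pages and scikit-learn pipeline below are made-up examples.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
import eli5

pages = [
    "forum discussing illicit marketplaces",      # hypothetical labeled pages
    "recipe blog about sourdough bread",
    "hidden service index of onion links",
    "sports scores and match highlights",
]
labels = [1, 0, 1, 0]  # 1 = Relevant, 0 = Irrelevant

vec = TfidfVectorizer()
X = vec.fit_transform(pages)

clf = LogisticRegression()
clf.fit(X, labels)

# Produce a human-readable list of the terms that push a page
# toward "Relevant" or "Irrelevant".
print(eli5.format_as_text(eli5.explain_weights(clf, vec=vec)))
```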

Infrastructure Components

When the app starts up, it will first try to connect to all of these components:

  • Mongo (>3.0.*) stores the data about users, workspaces, and metadata about the crawls.
  • Elasticsearch (2.0) stores the results of the crawls (screenshots, HTML, extracted text).
  • Kafka (10.1.*) handles the communication between the backend components regarding the crawls.

Custom Docker versions of these components, with the extra arguments needed to set up the stack correctly, are provided in the Containers section below.
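As a rough illustration of this startup check, the sketch below pings each of the three stores using common Python client libraries (pymongo, elasticsearch, kafka-python). The hostnames, ports, and timeouts are assumptions based on local defaults; Site Hound's real connection settings come from its own configuration, and this is not its actual startup code.

```python
# Minimal connectivity-check sketch, assuming default local hosts/ports
# and the pymongo, elasticsearch, and kafka-python client libraries.
from pymongo import MongoClient
from elasticsearch import Elasticsearch
from kafka import KafkaConsumer

def check_mongo(uri="mongodb://localhost:27017"):
    client = MongoClient(uri, serverSelectionTimeoutMS=2000)
    client.admin.command("ping")          # raises if Mongo is unreachable

def check_elasticsearch(host="http://localhost:9200"):
    es = Elasticsearch(host)
    if not es.ping():                     # False if Elasticsearch is unreachable
        raise RuntimeError("Elasticsearch is not reachable")

def check_kafka(servers="localhost:9092"):
    # Constructing a consumer triggers broker bootstrap; it raises
    # NoBrokersAvailable if no broker responds.
    KafkaConsumer(bootstrap_servers=servers, consumer_timeout_ms=2000)

for check in (check_mongo, check_elasticsearch, check_kafka):
    check()
print("All infrastructure components are reachable")
```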

Service Components:

These components offer a suite of capabilities to Site Hound. Only the first three components are required.

  • Sitehound-Frontend: The user interface web application that handles auth, metadata, and the labeled data.
  • Sitehound-Backend: Queries the search engines, follows the relevant links, orchestrates the screenshots, text extraction, language identification, page classification, and naive scoring using the cosine similarity of TF*IDF vectors (see the sketch after this list), and stores the result sets.
  • Splash: Used for screenshot and HTML capturing.
  • HH-DeepDeep: Allows the user to train a page model to perform on-topic crawls.
  • ExcavaTor: Our own Tor index. This is currently a private database. Ask us about it!
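The naive relevance scoring mentioned for Sitehound-Backend can be sketched as below: vectorize the workspace keywords and a fetched page with TF*IDF and take the cosine similarity between them. The keyword string, page text, and use of scikit-learn are illustrative assumptions, not the backend's actual implementation.

```python
# Minimal sketch of naive TF*IDF cosine scoring, assuming scikit-learn.
# The keyword list and page text are made-up examples.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

keywords = "onion marketplace hidden service forum"
page_text = "This forum lists hidden service marketplaces reachable as onion sites."

vec = TfidfVectorizer()
matrix = vec.fit_transform([keywords, page_text])  # row 0: keywords, row 1: page

# Score in [0, 1]; higher means the page text is closer to the keywords.
score = cosine_similarity(matrix[0], matrix[1])[0, 0]
print(f"relevance score: {score:.3f}")
```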

Here is the components diagram for reference: Components Diagram.

Install:

Check the installation guide

How to use it:

Check the walkthrough guide


