{"id":237,"date":"2012-10-01T12:58:15","date_gmt":"2012-10-01T11:58:15","guid":{"rendered":"http:\/\/xlike.ijs.si\/?p=237"},"modified":"2013-01-21T15:32:42","modified_gmt":"2013-01-21T14:32:42","slug":"data-infrastructure","status":"publish","type":"post","link":"http:\/\/xlike.ijs.si\/data-infrastructure\/","title":{"rendered":"Data infrastructure"},"content":{"rendered":"

\"\"<\/a><\/p>\n

The XLike project is about data analytics, and there can be no data analytics without data. Therefore, one of the first tasks in the project was to acquire a large-scale dataset of news data from the internet.


We set about this by creating a continuous news aggregator. This piece of software provides a real-time aggregated stream of textual news items published by RSS-enabled news providers across the world. The pipeline performs the following main steps (a sketch of the crawling step follows the list):

1. Periodically crawls a list of news feeds and obtains links to news articles.
2. Downloads the articles, taking care not to overload any of the hosting servers.
3. Parses each article to obtain:
   a. potential new RSS sources, to be used in step (1);
   b. a cleartext version of the article body.
4. Processes articles with Enrycher.
5. Exposes two streams of news articles (cleartext and Enrycher-processed) to end users.
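
As a rough illustration of step (1), the sketch below polls a feed list and collects previously unseen article links. It assumes the feedparser Python library; the feed list and the seen-URL set are stand-ins for the aggregator's central database.

```python
# Sketch of step (1): poll each feed and collect previously unseen article
# links. feedparser is a common Python choice for RSS parsing; the feed
# list and seen-URL set stand in for the aggregator's central database.
import time
import feedparser

def poll_feeds(feeds, seen_urls):
    """One polling pass over the feed list; returns newly discovered links."""
    new_links = []
    for feed_url in feeds:
        parsed = feedparser.parse(feed_url)
        for entry in parsed.entries:
            link = entry.get("link")
            if link and link not in seen_urls:
                seen_urls.add(link)
                new_links.append(link)
        time.sleep(1)  # space out requests so no server is overloaded
    return new_links
```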

The data sources in step (1) include:

1. roughly 75,000 RSS feeds from 1,900 sites, found on the internet (see step 3a)
2. a subset of Google News, collected with a specialized periodic crawler
3. private feeds provided by XLike project partners (Bloomberg, STA)

Check out the real-time demo at http://newsfeed.ijs.si/visual_demo/ (which does not show the contents of private feeds). The rate is bursty but averages roughly one article per second.

Cleartext extraction
News articles obtained from the internet need to be cleaned of extraneous markup and content (navigation, headers, footers, ads, …).
We use a purely heuristic approach based on the DOM tree. With the fast libxml package, parsing is not a limiting factor. The core of the heuristic is to take the first sufficiently large DOM element that contains enough promising <p> elements; failing that, we take the first <td> or <div> element that contains enough promising text. The definition of "promising" relies on relatively standard metrics from related work, most importantly the amount of markup within a node. Importantly, none of the heuristics are site-specific.
We achieve precision and recall of about 94%, which is comparable to the state of the art.
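
A minimal sketch of this heuristic, assuming the lxml bindings to libxml; all thresholds (MIN_TEXT, MIN_PROMISING_PARAS, and the ratios in is_promising) are illustrative guesses, not the project's tuned values.

```python
# Sketch of the DOM-based cleartext heuristic. All thresholds are
# illustrative guesses, not the project's tuned values.
from lxml import html

MIN_TEXT = 500           # characters for a "large enough" element
MIN_PROMISING_PARAS = 3  # promising <p> children required

def markup_ratio(el):
    """Fraction of an element's serialized length taken up by markup."""
    total = len(html.tostring(el, encoding="unicode"))
    return 1.0 - len(el.text_content()) / total if total else 1.0

def is_promising(el):
    # A node is "promising" if it holds real text rather than markup.
    return len(el.text_content().strip()) > 50 and markup_ratio(el) < 0.3

def extract_cleartext(page):
    tree = html.fromstring(page)
    # First choice: the first large enough element whose direct <p>
    # children look like article paragraphs.
    for el in tree.iter("*"):
        paras = [p for p in el.findall("p") if is_promising(p)]
        if len(paras) >= MIN_PROMISING_PARAS and len(el.text_content()) >= MIN_TEXT:
            return "\n\n".join(p.text_content().strip() for p in paras)
    # Fallback: the first <td> or <div> with enough promising text.
    for el in tree.iter("td", "div"):
        if len(el.text_content().strip()) >= MIN_TEXT and markup_ratio(el) < 0.3:
            return el.text_content().strip()
    return None
```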

Data enrichment
One of the goals of XLike is to provide advanced enrichment services on top of the cleartext articles. Some tools for English and Slovene are already in place: for those languages, we use Enrycher (http://enrycher.ijs.si/) to annotate each article with the named entities appearing in the text (resolved to Wikipedia where possible), determine its sentiment, and categorize the document into the general-purpose DMOZ category hierarchy.
We also annotate each article with its language. Detection is provided by a combination of Google's open-source Compact Language Detector (CLD) library for mainstream languages and a separate Bayesian classifier, trained on character trigram frequency distributions in a large public corpus covering over a hundred languages. We use CLD first; in the rare cases where the article's language is not supported by CLD, we fall back to the Bayesian classifier. The error introduced by automatic detection is below 1% (McCandless, 2011).
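
A sketch of this two-stage detection, assuming the pycld2 Python bindings for CLD; the trigram naive Bayes classifier below is schematic, and in practice would be trained on the large multilingual corpus mentioned above.

```python
# Sketch of the two-stage language detection: CLD first, Bayesian trigram
# fallback for languages CLD does not cover. pycld2 is one available
# binding for Google's CLD; the classifier below is schematic.
import math
from collections import Counter

import pycld2

def trigrams(text):
    text = " " + text.lower() + " "
    return [text[i:i + 3] for i in range(len(text) - 2)]

class TrigramBayes:
    def __init__(self):
        self.models = {}  # language code -> trigram log-probabilities

    def train(self, lang, corpus_text):
        counts = Counter(trigrams(corpus_text))
        total = sum(counts.values())
        # Add-one smoothing over the trigrams seen for this language.
        self.models[lang] = {g: math.log((c + 1) / (total + len(counts)))
                             for g, c in counts.items()}

    def classify(self, text):
        unseen = math.log(1e-7)  # penalty for trigrams absent from a model
        def score(lang):
            model = self.models[lang]
            return sum(model.get(g, unseen) for g in trigrams(text))
        return max(self.models, key=score)

def detect_language(text, fallback):
    is_reliable, _, details = pycld2.detect(text)
    lang = details[0][1]  # ISO code of CLD's best guess; "un" = unknown
    if is_reliable and lang != "un":
        return lang
    # Rare languages unsupported by CLD go to the Bayesian classifier.
    return fallback.classify(text)
```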

Language distribution
We cover 37 languages at an average daily volume of 100 articles or more each. English is the most frequent, accounting for an estimated 54% of articles. German, Spanish and French each account for 3 to 10 percent of the articles. Other languages comprising at least 1% of the corpus are Chinese, Slovenian, Portuguese, Korean, Italian and Arabic.

System architecture
The aggregator consists of several components, depicted in the flowchart below. The early stages of the pipeline (article downloader, RSS downloader, cleartext extractor) communicate via a central database; the later stages (cleartext extractor, enrichment services, content distribution services) form a true unidirectional pipeline and communicate through ZeroMQ sockets.

        \"\"<\/a><\/p>\n

Responsiveness
We poll the RSS feeds at varying intervals, from 5 minutes to 12 hours, depending on each feed's past activity. Google News is crawled every two hours. All crawling is currently performed from a single machine; precautions are taken not to overload any news source with overly frequent requests.
Based on articles with a known time of publication, we estimate that 70% of articles are fully processed by our pipeline within 3 hours of publication, and 90% within 12 hours.
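
An illustrative adaptive scheduling policy in this spirit: the 5-minute and 12-hour bounds come from the text above, but the doubling-and-halving rule is an assumption, not the actual scheduler.

```python
# Illustrative adaptive polling: active feeds are polled more often, quiet
# ones less. The 5-minute/12-hour bounds match the text; the doubling and
# halving rule is an assumed policy.
MIN_INTERVAL = 5 * 60        # 5 minutes, in seconds
MAX_INTERVAL = 12 * 60 * 60  # 12 hours

def next_interval(current, new_articles_found):
    if new_articles_found:
        # The feed is active: poll twice as often, down to the minimum.
        return max(MIN_INTERVAL, current // 2)
    # The feed was quiet: back off, up to the maximum.
    return min(MAX_INTERVAL, current * 2)
```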

Data dissemination
Upon completing the preprocessing pipeline, contiguous groups of articles are batched, and each batch is stored as a gzipped file on a separate distribution server. A file is created when the corresponding batch grows large enough (to avoid huge files) or contains old enough articles. End users poll the distribution server for changes over HTTP. This introduces some additional latency, but it is very robust, scalable, simple to maintain and universally accessible.
The stream is freely available for research purposes. Please visit http://newsfeed.ijs.si/ for details on obtaining an account and using the stream (data formats, APIs).
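
A sketch of the batching policy described above; the size and age thresholds are assumptions, since the text does not specify them.

```python
# Sketch of the batching policy: flush a gzipped batch once it is large
# enough or its oldest article is old enough. Thresholds are assumptions.
import gzip
import json
import os
import time

MAX_BATCH_BYTES = 10 * 1024 * 1024  # keep individual files from growing huge
MAX_BATCH_AGE = 30 * 60             # seconds before a batch counts as "old"

class BatchWriter:
    def __init__(self, out_dir="batches"):
        os.makedirs(out_dir, exist_ok=True)
        self.out_dir = out_dir
        self.lines, self.size, self.oldest = [], 0, None

    def add(self, article):
        line = json.dumps(article)
        self.lines.append(line)
        self.size += len(line)
        self.oldest = self.oldest or time.time()
        if self.size >= MAX_BATCH_BYTES or time.time() - self.oldest >= MAX_BATCH_AGE:
            self.flush()

    def flush(self):
        if not self.lines:
            return
        path = os.path.join(self.out_dir, f"batch-{int(time.time())}.json.gz")
        with gzip.open(path, "wt", encoding="utf-8") as f:
            f.write("\n".join(self.lines))
        self.lines, self.size, self.oldest = [], 0, None
```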

         <\/p>\n","protected":false},"excerpt":{"rendered":"

        The XLike project is about data analytics, and there can be no data analytics without data. Therefore, one of the first tasks in the project was to acquire a large-scale dataset of news data from the internet.   We set about this by creating a continuous news aggregator. This piece of software provides a real-time […]<\/p>\n","protected":false},"author":3,"featured_media":0,"comment_status":"closed","ping_status":"closed","sticky":false,"template":"","format":"standard","meta":[],"categories":[6,7],"tags":[],"_links":{"self":[{"href":"http:\/\/xlike.ijs.si\/wp-json\/wp\/v2\/posts\/237"}],"collection":[{"href":"http:\/\/xlike.ijs.si\/wp-json\/wp\/v2\/posts"}],"about":[{"href":"http:\/\/xlike.ijs.si\/wp-json\/wp\/v2\/types\/post"}],"author":[{"embeddable":true,"href":"http:\/\/xlike.ijs.si\/wp-json\/wp\/v2\/users\/3"}],"replies":[{"embeddable":true,"href":"http:\/\/xlike.ijs.si\/wp-json\/wp\/v2\/comments?post=237"}],"version-history":[{"count":6,"href":"http:\/\/xlike.ijs.si\/wp-json\/wp\/v2\/posts\/237\/revisions"}],"predecessor-version":[{"id":360,"href":"http:\/\/xlike.ijs.si\/wp-json\/wp\/v2\/posts\/237\/revisions\/360"}],"wp:attachment":[{"href":"http:\/\/xlike.ijs.si\/wp-json\/wp\/v2\/media?parent=237"}],"wp:term":[{"taxonomy":"category","embeddable":true,"href":"http:\/\/xlike.ijs.si\/wp-json\/wp\/v2\/categories?post=237"},{"taxonomy":"post_tag","embeddable":true,"href":"http:\/\/xlike.ijs.si\/wp-json\/wp\/v2\/tags?post=237"}],"curies":[{"name":"wp","href":"https:\/\/api.w.org\/{rel}","templated":true}]}}