Description
PyData Berlin 2016
The development of large-scale Knowledge Bases (KBs) has recently drawn a lot of attention and effort from both academia and industry. In this talk I will introduce how to use keywords and publicly available data to build a structured KB, and how to build a knowledge retrieval system for different languages using Python.
Many large-scale Knowledge Bases (KBs), such as Yago, Wikidata, Freebase, and Google’s Knowledge Graph, have been built by extracting facts from structured Wikipedia data and/or natural language Web documents.
A key observation when working with a knowledge base is that not all facts are useful or carry enough information. To tackle this problem, I will show how we combine various data sources to support fact and keyword selection. We will also discuss important questions around KB applications, including:
- The architecture of a KB processing and extraction system built on Wikipedia and two publicly available KBs, Wikidata and Yago.
- A method for calculating contextual relevance between facts (a toy version is sketched below).
- How to present different facts to users.
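As a rough illustration of the ingredients involved (not the speaker's actual implementation), the sketch below pulls facts for an entity from Wikidata's public SPARQL endpoint with `requests` and ranks them with a simple keyword-overlap score standing in for the contextual-relevance method discussed in the talk. The entity id `Q64`, the keyword list, and the scoring heuristic are placeholders chosen for the example.

```python
# Hypothetical sketch: fetch facts for one Wikidata entity and rank them by a
# toy keyword-overlap "contextual relevance" score. This is an illustration of
# the pipeline's building blocks, not the method presented in the talk.
import requests

WIKIDATA_SPARQL = "https://query.wikidata.org/sparql"


def fetch_facts(entity_id, limit=50):
    """Return (property label, value) pairs for a Wikidata entity, e.g. "Q64" (Berlin)."""
    query = f"""
    SELECT ?pLabel ?value ?valueLabel WHERE {{
      wd:{entity_id} ?prop ?value .
      ?p wikibase:directClaim ?prop .
      SERVICE wikibase:label {{ bd:serviceParam wikibase:language "en". }}
    }} LIMIT {limit}
    """
    resp = requests.get(
        WIKIDATA_SPARQL,
        params={"query": query, "format": "json"},
        headers={"User-Agent": "kb-talk-demo/0.1 (example)"},
    )
    resp.raise_for_status()
    facts = []
    for row in resp.json()["results"]["bindings"]:
        prop = row["pLabel"]["value"]
        # Prefer the human-readable label; fall back to the raw value (literals, URIs).
        value = row.get("valueLabel", row["value"])["value"]
        facts.append((prop, value))
    return facts


def keyword_relevance(fact, keywords):
    """Toy contextual-relevance score: share of context keywords appearing in the fact text."""
    text = " ".join(fact).lower()
    hits = sum(1 for kw in keywords if kw.lower() in text)
    return hits / max(len(keywords), 1)


if __name__ == "__main__":
    facts = fetch_facts("Q64")                       # Q64 = Berlin
    context = ["capital", "germany", "population"]   # keywords from the user's context
    ranked = sorted(facts, key=lambda f: keyword_relevance(f, context), reverse=True)
    for prop, value in ranked[:5]:
        print(f"{prop}: {value}")
```

In practice the talk replaces the naive keyword overlap with relevance signals built from multiple data sources, and extends the retrieval side to several languages.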
Resources: