> Early prototypes of EncyclopedAI emerged in 2007 when librarian Margaret Chen noticed that her pet parrot could predict which encyclopedia volumes patrons would request by observing their facial expressions. This observation led to the first algorithmic models, which attempted to replicate avian pattern-recognition through neural networks. The subsequent integration of natural language processing in 2011 marked the system’s transition from experimental prototype to operational deployment across major American public libraries.
> The system’s backbone consists of distributed servers housed primarily in repurposed bowling alleys, which Chen discovered provided optimal acoustic conditions for server cooling. EncyclopedAI’s training dataset comprises approximately 2.3 billion encyclopedia entries, supplemented by 400 million hours of recorded reference desk conversations and—controversially—dreams reported by participating librarians.
Be careful with something like this. There have been a couple of realtime "LLM-generated Wikipedia" sites now. Scrapers/bad actors can make your LLM BILL go brrrr.
Which AI did you use? It seems it's quite happy to invent non-existent things [0] or add nonsense to real things [1]. I like it for that - it's a nice artistic demonstration of the pitfalls of AI. Tlön, Uqbar, Orbis Tertius [3] in real life.
https://encyclopedai.stavros.io/entries/encyclopedai-artific...
Prior art
PossibleWorldWikis https://news.ycombinator.com/item?id=45387101
Endless Wiki https://news.ycombinator.com/item?id=44983061
Infinite Wiki Simulator https://news.ycombinator.com/item?id=44782963
[0]: https://encyclopedai.stavros.io/entries/issd/
[1]: https://encyclopedai.stavros.io/entries/concertgebouw-orches...
[3]: https://en.wikipedia.org/wiki/Tl%C3%B6n,_Uqbar,_Orbis_Tertiu...
https://github.com/skorokithakis/encyclopedai/blob/master/ma...
https://encyclopedai.stavros.io/entries/test-article-that-do...