{"id":381431,"date":"2025-02-10T16:48:11","date_gmt":"2025-02-10T16:48:11","guid":{"rendered":"https:\/\/www.techopedia.com\/?p=381431"},"modified":"2025-02-10T16:48:11","modified_gmt":"2025-02-10T16:48:11","slug":"diffbot-ai-model-knowledge-graph-technology","status":"publish","type":"post","link":"https:\/\/www.techopedia.com\/diffbot-ai-model-knowledge-graph-technology","title":{"rendered":"Diffbot\u2019s AI Model Suggests “Smaller Is Better” for LLMs"},"content":{"rendered":"

Artificial intelligence (AI) is nothing without data. Hence, the underlying assumption has been that whoever wins the AI race must have a pool of training data that is not just vast but better than everyone else's.

Take OpenAI’s GPT-4<\/a> and Meta’s Llama 3.1 as examples; they run on 1.7 trillion and 405 billion parameters, respectively, and have gained popularity due to their gigantic training datasets.<\/p>\n

Newly released large language models (LLMs) routinely shatter the parameter records of their predecessors, a trend that has become a central yardstick for measuring progress in LLM development on Hugging Face.

But is bigger always better? Diffbot Technologies, a California-based startup known for its knowledge graph technology, does not think so.


Key Takeaways