Large Language Models News | Blockchain.News

LARGE LANGUAGE MODELS

Former Twitter CEO Parag Agrawal's AI Startup Raises $30 Million

Ex-Twitter CEO Parag Agrawal's new AI startup secures $30 million in funding, focusing on software for large language model developers. Backed by prominent investors, the venture reflects Agrawal's shift from social media to AI innovation.

Over 70% Accuracy: ChatGPT Shows Promise in Clinical Decision Support

A study assessing ChatGPT's utility in clinical decision-making found that it achieved 71.7% overall accuracy across clinical vignettes, performing best on final diagnoses at 76.9%. The results highlight its potential as an AI support tool in healthcare workflows.

Stanford's WikiChat Addresses Hallucinations Problem and Surpasses GPT-4 in Accuracy

Stanford's WikiChat improves chatbot accuracy by grounding responses in Wikipedia, addressing the inherent problem of hallucinations and significantly outperforming GPT-4 in benchmark tests.

Virginia Tech Study Reveals Geographic Biases in ChatGPT's Environmental Justice Information

A Virginia Tech study reveals ChatGPT's limitations in providing location-specific information on environmental justice issues, highlighting geographic biases in its coverage.

Former Sequoia Partner Michelle Fradin, Involved in FTX Investment, Joins OpenAI

Michelle Fradin, a former Sequoia Capital partner who was involved in the firm's FTX investment, joins OpenAI to lead data efforts, moving from venture capital into large language model development.

Google Unveils Batch Calibration to Enhance LLM Performance

Google Research introduces Batch Calibration (BC), a method designed to enhance the performance of large language models (LLMs) by reducing their sensitivity to prompt design decisions. Unveiled on October 13, 2023, BC significantly improves performance across various tasks, showing promise for more robust LLM applications. It stands out for its zero-shot, self-adaptive nature and negligible additional computational cost, a notable advance in the field of machine learning.