AIatMeta AI News List | Blockchain.News

List of AI News about AIatMeta

16:09
Meta Advances On-Device AI with ExecuTorch for Meta Quest 3 and Wearables: Accelerating PyTorch AI Deployment Across Devices

According to @AIatMeta, Meta’s on-device AI runtime ExecuTorch is now deployed across devices including Meta Quest 3, Ray-Ban Meta, Oakley Meta Vanguard, and Meta Ray-Ban Display (source: ai.meta.com/blog/executorch-reality-labs-on-device-ai). ExecuTorch streamlines the deployment of PyTorch models by removing conversion steps and enabling pre-deployment validation directly within PyTorch. This shortens the research-to-production cycle and helps ensure efficient, consistent AI performance across Meta’s diverse hardware ecosystem. The move opens significant business opportunities for AI developers targeting edge devices, facilitating rapid prototyping and scalable AI solutions in consumer electronics. ExecuTorch’s deployment underscores the broader shift toward on-device AI, which addresses latency, privacy, and energy efficiency, key factors for next-generation AR and VR devices.
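In practice, the workflow described reduces to exporting a standard PyTorch module directly to an ExecuTorch program, with validation happening in PyTorch before anything ships to a device. A minimal sketch following ExecuTorch’s published export flow (the toy model and output file name are placeholders):

```python
import torch
from executorch.exir import to_edge

# Toy stand-in for a production PyTorch model.
class TinyClassifier(torch.nn.Module):
    def __init__(self):
        super().__init__()
        self.net = torch.nn.Sequential(
            torch.nn.Linear(64, 32), torch.nn.ReLU(), torch.nn.Linear(32, 10)
        )

    def forward(self, x):
        return self.net(x)

model = TinyClassifier().eval()
example_inputs = (torch.randn(1, 64),)

# Validate and lower entirely within PyTorch: torch.export captures the
# graph, then to_edge()/to_executorch() produce the on-device program
# with no separate conversion tool in between.
exported = torch.export.export(model, example_inputs)
program = to_edge(exported).to_executorch()

# The resulting .pte file is what the on-device runtime loads.
with open("tiny_classifier.pte", "wb") as f:
    f.write(program.buffer)
```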

2025-11-20 22:49
SAM 3 Sets New Benchmark: High-Quality Dataset with 4M Phrases and 52M Object Masks Doubles AI Performance

According to @AIatMeta, the SAM 3 model has achieved double the performance compared to baseline models by leveraging a meticulously curated dataset containing 4 million unique phrases and 52 million corresponding object masks. Kate, a researcher on the SAM 3 team, highlighted that this leap in accuracy and efficiency was driven by their advanced data engine, which enabled scalable data collection and annotation at unprecedented quality and scale. This development underlines the critical importance of large, diverse datasets for next-generation AI models, particularly in segmentation and computer vision applications. The business opportunity lies in developing robust data engines and high-quality annotated datasets, which are now proven to be key differentiators for AI model performance, as evidenced by SAM 3's results (Source: @AIatMeta, Nov 20, 2025).
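Conceptually, the dataset pairs a short noun phrase with every object mask it matches in an image. A record can be pictured as follows; the field names are illustrative, not the released schema:

```python
# Illustrative shape of a phrase-grounded segmentation record;
# field names are hypothetical, not SAM 3's released data format.
from dataclasses import dataclass
import numpy as np

@dataclass
class PhraseGroundedExample:
    image_id: str
    phrase: str              # e.g. "striped umbrella"
    masks: list[np.ndarray]  # one boolean HxW mask per matching object

example = PhraseGroundedExample(
    image_id="img_000001",
    phrase="striped umbrella",
    masks=[np.zeros((480, 640), dtype=bool)],  # placeholder mask
)
print(example.phrase, len(example.masks), "mask(s)")
```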

2025-11-19 17:07
SAM 3 Unified AI Model Launches with Advanced Detection, Segmentation, and Tracking Features

According to AI at Meta, SAM 3 is a newly launched unified AI model that enables detection, segmentation, and tracking of objects across both images and videos. This next-generation model introduces highly requested features such as text and exemplar prompts, allowing users to segment all objects of a specific target category efficiently. The integration of these functionalities supports a wider range of computer vision applications, making it easier for businesses to automate image and video analysis workflows. SAM 3 represents a significant advancement in multimodal AI, offering practical opportunities for industries like retail, security, and autonomous systems to improve object recognition and streamline visual data processing (Source: @AIatMeta on Twitter, 2025-11-19).
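The two prompt types work like this: a text prompt asks for masks over every instance of a named category, while an exemplar prompt marks one instance and asks the model to find the rest. A hypothetical sketch (class and method names are illustrative, not SAM 3’s released API):

```python
# Hypothetical interface; the real SAM 3 Python API may differ.
import numpy as np

class Sam3Model:
    """Stand-in for the real model class; names are illustrative."""
    def segment(self, image, text=None, exemplar_box=None):
        # A real model would detect and mask every matching instance here.
        return []  # list of boolean HxW masks, one per instance

model = Sam3Model()
image = np.zeros((480, 640, 3), dtype=np.uint8)

# Text prompt: mask every instance of a named category.
bus_masks = model.segment(image, text="yellow school bus")

# Exemplar prompt: box one instance, get all similar objects back.
similar_masks = model.segment(image, exemplar_box=(40, 60, 200, 260))
```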

2025-11-19 16:37
Meta Unveils SAM 3D: State-of-the-Art AI Model for 3D Object and Human Reconstruction from 2D Images

According to @AIatMeta, Meta has launched SAM 3D, a cutting-edge addition to the SAM collection that delivers advanced 3D understanding of everyday images. SAM 3D features two models: SAM 3D Objects for object and scene reconstruction, and SAM 3D Body for human pose and shape estimation. Both models set a new performance benchmark by transforming static 2D images into vivid, accurate 3D reconstructions. This innovation opens significant business opportunities for sectors such as AR/VR, gaming, e-commerce visualization, robotics, and healthcare, by enabling enhanced digital twins, immersive experiences, and automation based on state-of-the-art computer vision capabilities. (Source: @AIatMeta, go.meta.me/305985)
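As a rough picture of the two entry points, SAM 3D Objects maps an image plus an object mask to a 3D mesh, while SAM 3D Body maps an image to human pose and shape parameters. The sketch below is purely illustrative; class names, method names, and output dimensions are hypothetical, not Meta’s released API:

```python
# Hypothetical sketch of the two SAM 3D entry points.
import numpy as np

class Sam3DObjects:
    def reconstruct(self, image, mask):
        # A real model would return a textured mesh for the masked object.
        return {"vertices": np.empty((0, 3)), "faces": np.empty((0, 3), dtype=int)}

class Sam3DBody:
    def estimate(self, image):
        # A real model would return 3D human pose and shape parameters
        # (dimensions here follow common SMPL-style conventions, as an assumption).
        return {"pose": np.zeros(72), "shape": np.zeros(10)}

image = np.zeros((512, 512, 3), dtype=np.uint8)
object_mask = np.ones((512, 512), dtype=bool)

mesh = Sam3DObjects().reconstruct(image, object_mask)  # object/scene output
body = Sam3DBody().estimate(image)                     # pose + shape output
```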

2025-11-19 16:26
Meta Releases SAM 3: Advanced Unified AI Model for Object Detection, Segmentation, and Tracking Across Images and Videos

According to @AIatMeta, Meta has launched SAM 3, a unified AI model capable of object detection, segmentation, and tracking across both images and videos. SAM 3 introduces new features such as text and exemplar prompts, allowing users to segment all objects of a specified category efficiently. These enhancements address highly requested functionalities from the AI community. The learnings from SAM 3 will directly power new features in Meta AI and IG Edits apps, empowering creators with advanced segmentation tools and expanding business opportunities for AI-driven content creation and automation. Source: @AIatMeta (https://go.meta.me/591040)

2025-11-19 16:15
Meta Releases SAM 3 and SAM 3D: Advanced Segment Anything Models for AI-Powered Image, Video, and 3D Object Analysis

According to @AIatMeta, Meta has introduced a new generation of Segment Anything Models: SAM 3 and SAM 3D. SAM 3 enhances AI-driven object detection, segmentation, and tracking across images and videos, now supporting short text phrases and exemplar prompts for more intuitive workflows (source: @AIatMeta, https://go.meta.me/591040). SAM 3D extends these capabilities to 3D, enabling precise reconstruction of 3D objects and people from a single 2D image (source: @AIatMeta, https://go.meta.me/305985). These innovations present significant opportunities for developers and researchers in media content creation, computer vision, and AR/VR, streamlining complex tasks and opening new business avenues in AI-powered visual data analysis.

2025-11-10 18:12
Meta Omnilingual ASR Launch Brings Automatic Speech Recognition to 1,600+ Languages, Unlocking Global AI Opportunities

According to @AIatMeta, Meta has introduced the Omnilingual Automatic Speech Recognition (ASR) suite, delivering ASR capabilities for over 1,600 languages, including 500 low-coverage languages previously unsupported by any ASR system (source: https://go.meta.me/f56b6e). This breakthrough expands AI-driven transcription and translation services to underserved populations, paving the way for inclusive digital communication, real-time voice interfaces, and new global business opportunities in sectors like education, customer service, and accessibility. With comprehensive language coverage, the Meta ASR suite positions itself as a foundation for next-generation AI applications targeting emerging markets and diverse linguistic communities.
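If the checkpoints are published in a standard format, transcription could reduce to a few lines with the Hugging Face pipeline API; the model id below is a placeholder, not a confirmed checkpoint name:

```python
# Sketch of ASR inference via the Hugging Face pipeline API.
from transformers import pipeline

asr = pipeline(
    "automatic-speech-recognition",
    model="facebook/omnilingual-asr-placeholder",  # hypothetical id; substitute
)                                                  # the released checkpoint name

# Transcribe a local audio file (path is a placeholder).
result = asr("speech_sample.wav")
print(result["text"])
```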

2025-09-24 21:28
Meta FAIR Releases Code World Model (CWM): 32B-Parameter AI for Advanced Code Generation and Reasoning

According to @AIatMeta, Meta FAIR has introduced the Code World Model (CWM), a 32-billion-parameter research model engineered to advance world modeling in code generation and program reasoning (source: ai.meta.com/research/publications/cwm). The release of open weights and source code under a research license enables the AI community to extend and apply CWM for sophisticated code analysis, automation, and developer productivity solutions. This move signals Meta’s commitment to open research and accelerates innovation in AI-driven software development by facilitating experimentation in world model-based code reasoning (source: huggingface.co/facebook/cwm, github.com/facebookresearch/cwm).
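Given that the weights live on the Hugging Face Hub, loading CWM should follow the standard transformers pattern; a minimal sketch, assuming the facebook/cwm repo exposes a causal-LM checkpoint (generation settings are illustrative):

```python
# Minimal sketch: load CWM from the Hub and complete a code snippet.
# Assumes facebook/cwm is a standard causal-LM checkpoint.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("facebook/cwm")
model = AutoModelForCausalLM.from_pretrained(
    "facebook/cwm", torch_dtype=torch.bfloat16, device_map="auto"
)

prompt = "def fibonacci(n: int) -> int:\n"
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
output = model.generate(**inputs, max_new_tokens=64)
print(tokenizer.decode(output[0], skip_special_tokens=True))
```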

2025-09-17 22:03
Meta Connect 2025 Keynote Reveals Future of AI Wearables and Smart Devices

According to @AIatMeta, Meta Connect 2025 is set to showcase the latest advancements in AI wearables during its keynote livestream. The event will highlight new product launches and updates in AI-powered smart devices, signaling Meta's continued investment in next-generation artificial intelligence hardware. Industry analysts expect detailed demonstrations of AI-driven personal assistants, smart glasses, and edge AI solutions, underscoring significant business opportunities in the expanding AI wearables market. The keynote aims to address growing enterprise and consumer demand for seamless AI integration in everyday devices, with a focus on real-world applications and market impact (source: @AIatMeta, Sep 17, 2025).

2025-08-14 16:19
Day-0 Support for DINOv3 in Hugging Face Transformers Unlocks New AI Vision Opportunities

According to @AIatMeta, Hugging Face Transformers now offers Day-0 support for Meta's DINOv3 vision models, allowing developers and businesses immediate access to the full DINOv3 model family for advanced computer vision tasks. This integration streamlines the deployment of state-of-the-art self-supervised learning models, enabling practical applications in areas such as image classification, object detection, and feature extraction. The collaboration is expected to accelerate innovation in AI-powered visual analysis across sectors like e-commerce, healthcare, and autonomous vehicles, opening up new business opportunities for companies seeking scalable, high-performance vision AI solutions (source: @AIatMeta on Twitter, August 14, 2025).
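Day-0 support means a DINOv3 backbone loads through the standard transformers classes; a minimal feature-extraction sketch, with the checkpoint id as a placeholder for any released DINOv3 model:

```python
# Feature extraction with a DINOv3 backbone via transformers.
import torch
from PIL import Image
from transformers import AutoImageProcessor, AutoModel

ckpt = "facebook/dinov3-vitb16"  # placeholder; use a released DINOv3 checkpoint
processor = AutoImageProcessor.from_pretrained(ckpt)
model = AutoModel.from_pretrained(ckpt).eval()

image = Image.open("photo.jpg")  # any local image
inputs = processor(images=image, return_tensors="pt")
with torch.no_grad():
    features = model(**inputs).last_hidden_state  # (1, num_tokens, hidden_dim)
print(features.shape)
```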

2025-08-14 16:19
Meta Releases DINOv3 for Commercial Use: Full Pre-trained Computer Vision Models and Code Available

According to @AIatMeta, Meta has released DINOv3 under a commercial license, providing the computer vision community with a comprehensive suite of pre-trained backbones, adapters, and both training and evaluation code (source: @AIatMeta, August 14, 2025). This release is designed to accelerate AI innovation and commercial adoption by making state-of-the-art self-supervised learning models easily accessible to enterprises and developers. The availability of production-ready resources opens new business opportunities for companies seeking to integrate advanced vision AI into real-world applications, such as industrial automation, medical imaging, and retail analytics.

2025-08-14 16:19
DINOv3: Self-Supervised Learning for 1.7B-Image, 7B-Parameter AI Model Revolutionizes Dense Prediction Tasks

According to @AIatMeta, DINOv3 uses self-supervised learning (SSL) to train a 7-billion-parameter model on 1.7 billion images without any labeled data, which is especially impactful for annotation-scarce domains such as satellite imagery (Source: @AIatMeta, August 14, 2025). The model produces high-quality, high-resolution features and delivers state-of-the-art performance on dense prediction tasks, providing advanced solutions for industries requiring detailed image analysis. This development highlights significant business opportunities in sectors like remote sensing, medical imaging, and automated inspection, where labeled data is limited and high-resolution understanding is crucial.

2025-08-11 11:20
DINOv3: State-of-the-Art Self-Supervised Computer Vision Model Surpasses Specialized Solutions in High-Resolution Image Recognition

According to @AIatMeta, DINOv3 is a new state-of-the-art computer vision model trained using self-supervised learning (SSL) that generates powerful, high-resolution image features. Notably, DINOv3 enables a single frozen vision backbone to outperform multiple specialized solutions across several long-standing dense prediction tasks, such as semantic segmentation and object detection. This advancement highlights significant business opportunities for organizations seeking efficient, generalizable AI vision systems, reducing the need for custom model development and enabling broader deployment of AI-powered image analytics in industries like healthcare, autonomous vehicles, and retail (Source: AI at Meta on Twitter, August 14, 2025).
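The pattern behind that claim is a single backbone with gradients disabled, shared across tasks, with only a small head trained per task. A generic PyTorch sketch of the idea (the toy backbone stands in for a DINOv3 ViT feature extractor):

```python
# Conceptual sketch of the frozen-backbone pattern: one shared vision
# backbone with gradients disabled, plus a small trainable per-task head
# (here, a 1x1 conv over patch features for semantic segmentation).
import torch
import torch.nn as nn

class FrozenBackboneSegmenter(nn.Module):
    def __init__(self, backbone: nn.Module, feat_dim: int, num_classes: int):
        super().__init__()
        self.backbone = backbone.eval()
        for p in self.backbone.parameters():
            p.requires_grad = False           # backbone stays frozen
        self.head = nn.Conv2d(feat_dim, num_classes, kernel_size=1)

    def forward(self, x):
        with torch.no_grad():
            feats = self.backbone(x)          # (B, feat_dim, H/16, W/16)
        return self.head(feats)              # per-patch class logits

# Toy backbone standing in for a DINOv3 ViT feature extractor.
toy_backbone = nn.Conv2d(3, 256, kernel_size=16, stride=16)
seg = FrozenBackboneSegmenter(toy_backbone, feat_dim=256, num_classes=21)
logits = seg(torch.randn(2, 3, 224, 224))    # (2, 21, 14, 14)
```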

2025-08-05 12:06
Meta FAIR’s Brain & AI Team Wins 1st Place at Algonauts 2025 with TRIBE 1B Parameter Brain Modeling AI

According to @AIatMeta, Meta FAIR’s Brain & AI team secured first place at the Algonauts 2025 brain modeling competition with their TRIBE model, a deep neural network featuring 1 billion parameters. TRIBE (Trimodal Brain Encoder) is the first AI model specifically trained to predict human brain responses to various stimuli, marking a significant advancement in AI-powered neuroscience. This achievement demonstrates the potential for large-scale models to bridge AI and cognitive neuroscience, paving the way for new commercial applications in brain-computer interfaces, neuroimaging interpretation, and advanced neural analytics (Source: @AIatMeta, August 11, 2025).
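As a conceptual illustration of what a trimodal brain encoder does, the sketch below fuses video, audio, and text embeddings and regresses per-voxel responses; it illustrates the idea only and is not Meta’s TRIBE implementation (all dimensions are arbitrary):

```python
# Conceptual trimodal encoder: project each modality into a shared space,
# fuse, and predict a response per brain voxel. Not the TRIBE architecture.
import torch
import torch.nn as nn

class TrimodalEncoder(nn.Module):
    def __init__(self, d_video=768, d_audio=512, d_text=768,
                 d_fused=1024, n_voxels=8192):
        super().__init__()
        self.proj_v = nn.Linear(d_video, d_fused)
        self.proj_a = nn.Linear(d_audio, d_fused)
        self.proj_t = nn.Linear(d_text, d_fused)
        self.head = nn.Sequential(
            nn.LayerNorm(d_fused), nn.GELU(), nn.Linear(d_fused, n_voxels)
        )

    def forward(self, v, a, t):
        fused = self.proj_v(v) + self.proj_a(a) + self.proj_t(t)
        return self.head(fused)  # predicted response per voxel

model = TrimodalEncoder()
pred = model(torch.randn(4, 768), torch.randn(4, 512), torch.randn(4, 768))
print(pred.shape)  # (4, 8192)
```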

2025-08-05 12:06
Meta Releases Open Molecular Crystals (OMC25) Dataset with 25 Million Structures for AI-Driven Drug Discovery

According to AI at Meta, Meta has released the Open Molecular Crystals (OMC25) dataset, which contains 25 million molecular crystal structures, to support the FastCSP workflow for AI-powered crystal structure prediction (source: AI at Meta Twitter, August 5, 2025). This large-scale dataset enables researchers and AI developers to accelerate drug discovery, materials science, and computational chemistry by providing a comprehensive foundation for training and benchmarking generative AI models. The release of OMC25 is expected to drive innovation in the pharmaceutical and materials industries by facilitating the development of new AI algorithms for crystal structure prediction and molecular property optimization (source: Meta research paper).

2025-08-04 21:11
Meta FAIR Chemistry Team Unveils FastCSP: AI-Powered Workflow Accelerates Organic Crystal Structure Discovery

According to AI at Meta, the Meta FAIR Chemistry team has announced FastCSP, a new AI-driven workflow designed to rapidly generate stable crystal structures for organic molecules. This technology significantly accelerates material discovery efforts by automating and optimizing the design of molecular crystals, reducing the time required for researchers and businesses to identify viable compounds for new materials and pharmaceuticals (source: AI at Meta, August 5, 2025). The deployment of FastCSP demonstrates how AI is transforming materials science, opening commercial opportunities in drug development, electronics, and advanced manufacturing through faster R&D cycles and improved accuracy in predicting molecular stability.
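Crystal structure prediction workflows of this kind typically follow a generate-relax-rank loop; the schematic below illustrates that loop with stub functions, not FastCSP’s actual generators or ML potentials:

```python
# Schematic generate-relax-rank loop behind CSP workflows like FastCSP.
# generate_candidate() and relax() are stubs; a real workflow samples
# candidate packings and relaxes them with an ML interatomic potential.
import random

def generate_candidate(molecule: str) -> dict:
    # Stand-in: sample a random packing (space group + lattice params).
    return {"molecule": molecule, "space_group": random.randint(1, 230)}

def relax(structure: dict) -> tuple[dict, float]:
    # Stand-in for ML-potential relaxation; returns structure + energy.
    return structure, random.uniform(-1.0, 0.0)

def predict_crystal_structures(molecule: str, n_candidates: int = 1000):
    relaxed = [relax(generate_candidate(molecule)) for _ in range(n_candidates)]
    relaxed.sort(key=lambda pair: pair[1])   # rank by predicted energy
    return relaxed[:10]                      # lowest-energy polymorphs

best = predict_crystal_structures("aspirin")
```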

2025-07-30 13:06
Meta FAIR, Georgia Tech, and Cusp AI Launch Largest Open Direct Air Capture 2025 Dataset for AI-Driven CO2 Removal Solutions

According to @MetaAI, Meta FAIR, Georgia Tech, and Cusp AI have released the Open Direct Air Capture 2025 dataset, now the largest open dataset focused on discovering advanced materials for direct air capture of CO2. This dataset empowers AI researchers and companies to rapidly and accurately screen materials for carbon capture, significantly accelerating the development of new, efficient direct air capture technologies. The availability of such a comprehensive, high-quality dataset presents immediate business opportunities for startups and enterprises aiming to apply machine learning and AI to environmental and climate tech sectors. The release is expected to drive innovation in AI-powered materials discovery and commercial applications for carbon removal (Source: @MetaAI, @GeorgiaTech, @cusp_ai on Twitter).

2025-06-27 16:52
Meta Unveils Vision for Personal Superintelligence: AI for Everyone in 2025

According to @AIatMeta, Mark Zuckerberg has shared Meta’s comprehensive vision for the future of personal superintelligence, emphasizing the development of accessible AI tools designed for individual users. In his official letter published on meta.com/superintelligence, Zuckerberg outlined Meta's strategy to democratize advanced AI, making powerful personal assistants available to all users. The initiative highlights Meta’s commitment to open-source AI models, focusing on privacy, personalization, and seamless integration with daily life. This move positions Meta as a leader in the evolving AI assistant market, opening new business opportunities for developers and enterprises interested in building on Meta's expanding ecosystem (Source: @AIatMeta, 2025-07-30).

2025-06-27 16:52
Meta Releases Largest Audiovisual Behavioral AI Dataset: Seamless Interaction Dataset for Human-Like Social Understanding

According to @AIatMeta, Meta has publicly released the Seamless Interaction Dataset, featuring over 4,000 participants and 4,000+ hours of interaction videos, establishing it as the largest known video dataset of its kind. This dataset is designed to support the development of advanced audiovisual behavioral AI models capable of understanding and generating human-like social interactions. For AI businesses and researchers, this release presents significant opportunities to enhance conversational AI, virtual assistants, and social robotics with improved empathy and social context awareness, using real-world, large-scale audiovisual data. Source: Meta via Twitter (June 27, 2025).

2025-06-27 16:52
Meta FAIR Launches Seamless Interaction: Advanced Audiovisual AI Models for Interpersonal Dynamics

According to @AIatMeta, Meta FAIR has introduced Seamless Interaction, a research initiative focused on modeling interpersonal dynamics using state-of-the-art audiovisual behavioral models. Developed in collaboration with Meta’s Codec Avatars and Core AI labs, these models analyze and synthesize multimodal human behaviors, enabling more natural and effective virtual interactions. This breakthrough has the potential to transform AI-driven communication in virtual reality, enterprise collaboration, and customer engagement platforms by offering real-time, nuanced behavioral understanding. Verified details indicate that this project could open significant business opportunities for companies seeking to enhance virtual meeting tools and immersive experiences (Source: @AIatMeta, June 27, 2025).
