Ollama AI News List | Blockchain.News

List of AI News about Ollama

2026-04-02 16:08
Google’s Gemma Now Apache 2.0: 400M Downloads, 100K Variants — Latest Business Impact Analysis

According to Demis Hassabis on X, Google’s Gemma family is now available under the Apache 2.0 license in Google AI Studio, with model weights downloadable from Hugging Face, Kaggle, and Ollama, alongside a reported 400 million downloads and 100,000 variants to date. As reported by Google’s official blog, Apache 2.0 licensing materially lowers friction for commercial use, enabling enterprises to fine-tune, deploy on-premises, and embed Gemma in products without restrictive terms, expanding opportunities for cost-efficient inference and edge deployment. According to Google’s announcement page, distribution across Hugging Face and Ollama streamlines multi-platform serving and local inference, while Kaggle access supports rapid prototyping and education pipelines. As reported by Google, centralized resources on the Gemma page outline model cards and safety guidance, reducing integration risk for regulated industries by clarifying usage boundaries and evaluation protocols.
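Gemma weights pulled through Ollama are typically served locally via Ollama's REST API (a `POST` to `/api/generate` on the default port 11434). A minimal sketch of building such a request, assuming a locally running Ollama daemon and a `gemma` model tag (the exact tag should be checked against the Ollama model library):

```python
import json
import urllib.request

# Ollama's default local endpoint for non-streaming text generation.
OLLAMA_URL = "http://localhost:11434/api/generate"

def build_generate_request(model: str, prompt: str) -> urllib.request.Request:
    """Build a non-streaming generate request for a local Ollama daemon."""
    payload = json.dumps({"model": model, "prompt": prompt, "stream": False}).encode()
    return urllib.request.Request(
        OLLAMA_URL, data=payload, headers={"Content-Type": "application/json"}
    )

# Actually sending this requires the daemon running and the model pulled first,
# e.g. `ollama pull gemma` (model tag is an assumption, not from the article).
req = build_generate_request("gemma", "Summarize the Apache 2.0 license in one sentence.")
print(json.loads(req.data)["model"])  # → gemma
```

Because the payload is plain JSON over localhost HTTP, the same request shape works from any language, which is part of why Ollama distribution simplifies local inference.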

2026-03-13 04:37
OpenClaw v2026.3.12 Release: Dashboard v2, Fast Mode, Plugin Architecture for Ollama, SGLang, and vLLM, and Ephemeral Device Tokens

According to OpenClaw on Twitter, the v2026.3.12 release introduces Dashboard v2 with a streamlined control UI, a new /fast mode to speed model interactions, and a plugin-based integration path for Ollama, SGLang, and vLLM that trims the core footprint, improving modularity and maintainability (source: OpenClaw Twitter; release notes on GitHub). According to the GitHub release notes, device tokens are now ephemeral to reduce long-lived credential risk, and cron and Windows reliability fixes address scheduled-task stability and cross-platform uptime for on-prem and self-hosted AI deployments (source: GitHub OpenClaw releases). As reported by OpenClaw, these updates target faster inference routing, safer authentication, and easier backend swapping, key capabilities for teams orchestrating local LLMs and inference servers in production environments (source: OpenClaw Twitter).
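The plugin-based integration path described above, where the core stays small and Ollama, SGLang, or vLLM load as interchangeable backends, resembles a registry/factory pattern. A hedged sketch of that idea; all names here are hypothetical illustrations, not taken from the OpenClaw codebase:

```python
from typing import Callable, Dict

# Hypothetical plugin registry: backend names map to client factories, so the
# core never imports Ollama/SGLang/vLLM code until a plugin is selected.
_BACKENDS: Dict[str, Callable[[], "InferenceBackend"]] = {}

class InferenceBackend:
    """Minimal interface every backend plugin must implement."""
    def generate(self, prompt: str) -> str:
        raise NotImplementedError

def register_backend(name: str):
    """Decorator that registers a backend factory under a short name."""
    def wrap(factory: Callable[[], InferenceBackend]):
        _BACKENDS[name] = factory
        return factory
    return wrap

def get_backend(name: str) -> InferenceBackend:
    """Instantiate the requested backend, failing fast on unknown names."""
    try:
        return _BACKENDS[name]()
    except KeyError:
        raise ValueError(f"unknown backend {name!r}; installed: {sorted(_BACKENDS)}")

@register_backend("echo")  # stand-in plugin; a real one would wrap Ollama's HTTP API
class EchoBackend(InferenceBackend):
    def generate(self, prompt: str) -> str:
        return f"echo: {prompt}"

print(get_backend("echo").generate("hi"))  # → echo: hi
```

Swapping backends then reduces to changing one configuration string, which matches the release's stated goal of easier backend swapping without growing the core footprint.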
