
INBOUND, HubSpot's annual conference for marketing and sales professionals, took place in San Francisco this year, with three days of insights and events across marketing, sales, CX, and AI innovation. It was a mix of the new, like the Creators Corner and the Tech Stack Showcase Stage, and the familiar, like HubSpot…

IBM today announced the release of Granite 4.0, the newest generation of its home-grown family of open-source large language models (LLMs), designed to balance high performance with lower memory and cost requirements. Despite being one of the oldest active tech companies in the U.S. (founded in 1911, 114 years ago!), "Big Blue" as its…

Salesforce Inc. is expanding its artificial intelligence platform with new data management and governance capabilities, aiming to address what the company says is a crisis in enterprise AI adoption where more than 80% of projects fail to deliver meaningful business value. The San Francisco-based software giant announced Thursday a suite of new tools designed to…

Google wants its coding assistant, Jules, to be far more integrated into developers’ terminals than ever. The company wants to make it a more workflow-native tool, hoping that more people will use it beyond the chat interface. Jules, which the company first announced in December 2024, will gain two new features: a Jules API to…

A new study by Shanghai Jiao Tong University and SII Generative AI Research Lab (GAIR) shows that training large language models (LLMs) for complex, autonomous tasks does not require massive datasets. Their framework, LIMI (Less Is More for Intelligent Agency), builds on similar work in other areas of LLM research and finds that “machine autonomy…

OpenAI will host more than 1,500 developers at its largest annual conference on Monday, as the company behind ChatGPT seeks to maintain its edge in an increasingly competitive artificial intelligence landscape. The third annual DevDay conference at San Francisco's Fort Mason represents a critical moment for OpenAI, which has seen its dominance challenged by rapid…

Huawei’s Computing Systems Lab in Zurich has introduced a new open-source quantization method for large language models (LLMs) aimed at reducing memory demands without sacrificing output quality. The technique, called SINQ (Sinkhorn-Normalized Quantization), is designed to be fast, calibration-free, and easy to integrate into existing model workflows. The code for performing it has been made…
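The blurb names SINQ's core idea: Sinkhorn-style normalization applied before quantization, with no calibration data. As a rough illustration of that general idea only (not the published SINQ algorithm; the function names, the use of standard deviations as the balancing statistic, the 4-bit symmetric scheme, and the iteration count are all assumptions for this sketch), one might balance a weight matrix's row and column scales with alternating multiplicative updates, then quantize the normalized matrix:

```python
import numpy as np

def sinkhorn_normalize(W, iters=10):
    """Balance row and column standard deviations with alternating
    multiplicative updates (a Sinkhorn-style dual scaling; illustrative
    sketch, not the exact SINQ procedure)."""
    r = np.ones(W.shape[0])
    c = np.ones(W.shape[1])
    for _ in range(iters):
        r *= (W / np.outer(r, c)).std(axis=1)  # push row stds toward 1
        c *= (W / np.outer(r, c)).std(axis=0)  # push column stds toward 1
    return W / np.outer(r, c), r, c

def quantize_int4(Wn):
    """Symmetric round-to-nearest quantization to 4-bit integer levels."""
    scale = np.abs(Wn).max() / 7.0
    q = np.clip(np.round(Wn / scale), -8, 7).astype(np.int8)
    return q, scale

rng = np.random.default_rng(0)
# A matrix with deliberately uneven column scales -- the kind of
# imbalance that a single per-matrix scale handles poorly.
W = rng.standard_normal((64, 64)) * np.logspace(0, 2, 64)

Wn, r, c = sinkhorn_normalize(W)
q, scale = quantize_int4(Wn)
# Dequantize by re-applying the two per-axis scale vectors.
W_hat = (q * scale) * np.outer(r, c)
rel_err = np.linalg.norm(W - W_hat) / np.linalg.norm(W)
```

Because the only metadata added is the pair of scale vectors `r` and `c`, this kind of scheme stays cheap to store and needs no calibration set, which matches the calibration-free property the article attributes to SINQ.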

A cycle-accurate alternative to speculation, unifying scalar, vector, and matrix compute. For more than half a century, computing has relied on the von Neumann or Harvard model. Nearly every modern chip, from CPUs and GPUs to many specialized accelerators, derives from this design. Over time, new architectures like Very Long Instruction Word (VLIW),…

In the race to automate everything, from customer service to code, AI is being heralded as a silver bullet. The narrative is seductive: AI tools that can write entire applications, streamline engineering teams, and reduce the need for expensive human developers, along with hundreds of other roles. But from my point of view…

Guest author: Or Hillel, Green Lamp

Applications have become the foundation of how organisations deliver services, connect with customers, and manage important operations. Every transaction, interaction, and workflow runs on a web app, mobile interface, or API. That central role has made applications one of the most attractive and frequently targeted points of entry for attackers.…