OpenAI ramps up enterprise support with a focus on security, control, and cost

The new features are designed to give businesses more control, enhance security, and offer cost-effective options.

OpenAI, best known for its AI chatbot ChatGPT, is making a strong push for the enterprise market. In a move that could intensify competition among enterprise AI players, the company announced a slew of new features designed to give businesses more control, enhance security, and offer cost-effective options when integrating OpenAI’s AI technologies into their operations.

“We’re deepening our support for enterprises with new features that are useful for both large businesses and any developers who are scaling quickly on our platform,” OpenAI said.

This move positions OpenAI as a serious player in the growing market for large language model (LLM) APIs, where Meta’s open-source Llama models, recently upgraded with the Llama 3 release, are a prominent rival.

“In terms of competitive advantages, OpenAI's commitment to openness, collaboration, and ethical AI development sets it apart,” said Pradeepta Mishra, an AI expert, published author on LLMs, and co-founder of data privacy firm Data Safeguard. “By releasing state-of-the-art models like GPT and fostering a community-driven approach to AI research, OpenAI has garnered significant attention and adoption in the AI ecosystem.”

Moreover, OpenAI’s focus on enterprise solutions can accelerate the adoption of LLMs among businesses, Mishra said. “While OpenAI's targeting of enterprises can drive faster adoption of LLMs in the business world, businesses are likely to approach adoption with a mix of enthusiasm and caution, considering various factors such as trust, customization, regulatory compliance, cost, risk management, and ethical considerations,” Mishra cautioned.

Security first: Protecting sensitive data

Among the key upgrades is the introduction of Private Link, a feature that enables direct communication between a customer’s cloud infrastructure, such as Microsoft Azure, and OpenAI, minimizing exposure to the open internet and thereby reducing cybersecurity threats. In addition, OpenAI now offers multi-factor authentication (MFA), which adds an extra layer of security for user accounts.

“These are new additions to our existing stack of enterprise security features including SOC 2 Type II certification, single sign-on (SSO), data encryption at rest using AES-256 and in transit using TLS 1.2, and role-based access controls,” OpenAI said in the announcement. “We also offer Business Associate Agreements for healthcare companies that require HIPAA compliance and a zero data retention policy for API customers with a qualifying use case.”
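To illustrate how Private Link changes the traffic path without changing application code, here is a rough sketch assuming an Azure OpenAI deployment fronted by a private endpoint; the resource name, deployment name, and API version are placeholders, not details from OpenAI’s announcement.

```python
# Illustrative sketch: calling an Azure-hosted OpenAI deployment whose traffic is
# routed over an Azure Private Link endpoint instead of the public internet.
# Resource name, deployment name, and API version below are placeholders.
import os

from openai import AzureOpenAI

client = AzureOpenAI(
    # With Private Link configured, this hostname resolves to a private IP
    # inside the customer's virtual network (via a private DNS zone), so
    # requests never traverse the open internet.
    azure_endpoint="https://my-enterprise-resource.openai.azure.com",
    api_key=os.environ["AZURE_OPENAI_API_KEY"],
    api_version="2024-02-01",
)

response = client.chat.completions.create(
    model="my-gpt-4-deployment",  # the Azure deployment name, not the raw model ID
    messages=[{"role": "user", "content": "Summarize our Q1 incident reports."}],
)
print(response.choices[0].message.content)
```

The point of the sketch is that the security boundary is enforced at the network layer; the client code itself is unchanged once the private endpoint and DNS are in place.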

Taking control: Granular oversight and cost management

OpenAI’s new Projects feature could be a game-changer for businesses managing multiple projects. It gives organizations granular control and oversight over individual projects, the statement explained.

“This includes the ability to scope roles and API keys to specific projects, restrict/allow which models to make available, and set usage- and rate-based limits to give access and avoid unexpected overages. Project owners will also have the ability to create service account API keys, which give access to projects without being tied to an individual user,” the company said.
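In practice, a project-scoped or service account key means a workload inherits the project’s model allowances, rate limits, and billing without being tied to any employee’s account. A minimal sketch using the OpenAI Python SDK follows; the key, project ID, and model name are placeholders.

```python
# Hypothetical sketch: using a project-scoped (service account) API key so that
# usage, rate limits, and model access are governed by that project's settings.
# The environment variable, project ID, and model name are placeholders.
import os

from openai import OpenAI

client = OpenAI(
    api_key=os.environ["OPENAI_PROJECT_API_KEY"],  # key scoped to one project
    project="proj_abc123",                         # pin requests to that project
)

# Requests made with this client are billed and rate-limited under the project,
# and can only use models the project has been allowed to access.
response = client.chat.completions.create(
    model="gpt-4-turbo",
    messages=[{"role": "user", "content": "Hello from the analytics project."}],
)
print(response.choices[0].message.content)
```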

OpenAI has also announced “two new ways” to help businesses manage their expenses as they scale up AI usage: discounted rates (up to 50%) for committed throughput and reduced costs for asynchronous workloads.

“Customers with a sustained level of tokens per minute (TPM) usage on GPT-4 or GPT-4 Turbo can request access to provisioned throughput to get discounts ranging from 10–50% based on the size of the commitment,” the announcement mentioned. Similarly, the company has also introduced a Batch API, at a 50% reduced price, specifically for non-urgent tasks. “This is ideal for use cases like model evaluation, offline classification, summarization, and synthetic data generation,” the company said.
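For the asynchronous workloads the Batch API targets, the workflow is to upload a file of requests and collect the results later rather than waiting on each call. The sketch below assumes the OpenAI Python SDK; the JSONL file name and model are placeholders.

```python
# Sketch of submitting a non-urgent job through the Batch API at the discounted rate.
# "requests.jsonl" is a placeholder file in which each line is one API request, e.g.:
# {"custom_id": "doc-1", "method": "POST", "url": "/v1/chat/completions",
#  "body": {"model": "gpt-4-turbo", "messages": [{"role": "user", "content": "Summarize ..."}]}}
from openai import OpenAI

client = OpenAI()

# Upload the batch input file, then create the batch job against it.
batch_file = client.files.create(file=open("requests.jsonl", "rb"), purpose="batch")

batch = client.batches.create(
    input_file_id=batch_file.id,
    endpoint="/v1/chat/completions",
    completion_window="24h",  # results are returned asynchronously within this window
)

# Poll later; completed results arrive as an output file of JSONL responses.
print(client.batches.retrieve(batch.id).status)
```

This trade-off, cheaper tokens in exchange for latency measured in hours, is what makes the Batch API a fit for the evaluation, classification, and summarization jobs OpenAI cites.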

Enhanced assistant capabilities for developers

The company also said that, with these upgrades, developers working with OpenAI’s Assistants API will benefit from several improvements. Notable among them is “improved retrieval with ‘file_search’ which can ingest up to 10,000 files per assistant — a 500x increase from the previous file limit of 20,” the company added. “The tool is faster, supports parallel queries through multi-threaded searches, and has enhanced reranking and query rewriting.”
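For developers, wiring file_search into an assistant amounts to loading documents into a vector store and attaching that store to the assistant. A minimal sketch assuming the Assistants API (v2) surface of the OpenAI Python SDK follows; the file name, store name, and instructions are placeholders.

```python
# Minimal sketch of attaching files to an assistant through the file_search tool.
# File name, vector store name, and instructions below are placeholders.
from openai import OpenAI

client = OpenAI()

# A vector store holds the ingested files (up to the new 10,000-file ceiling).
vector_store = client.beta.vector_stores.create(name="enterprise-knowledge-base")

# Upload a document and add it to the vector store.
uploaded = client.files.create(file=open("employee_handbook.pdf", "rb"), purpose="assistants")
client.beta.vector_stores.files.create(vector_store_id=vector_store.id, file_id=uploaded.id)

# Create an assistant that searches those files when answering.
assistant = client.beta.assistants.create(
    model="gpt-4-turbo",
    instructions="Answer questions using the attached company documents.",
    tools=[{"type": "file_search"}],
    tool_resources={"file_search": {"vector_store_ids": [vector_store.id]}},
)
print(assistant.id)
```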

The company also said that it will “keep adding new features focused on enterprise-grade security, administrative controls, and cost management.” Data Safeguard’s Mishra expects this will raise the bar for competition in the LLM space going forward. “OpenAI's focus on enterprise solutions can accelerate the adoption of LLMs among businesses.”

Copyright © 2024 IDG Communications, Inc.