Florida Marketing News
SEE OTHER BRANDS

Fresh news on media and advertising in Florida

Fortune 500 companies use AI, but security rules are still under construction

AI is no longer a niche technology — it’s becoming a fundamental part of business strategy for most Fortune 500 companies in 2025. All of them are now using AI, but they differ in their approaches to implementing it. Cybernews researchers warn of the risks involved as the rulebooks have yet to be written.

AI is already integrated with core operations, from customer services to strategic decision-making. And this comes with some significant risks.

“While big companies are quick to jump to the AI bandwagon, the risk management part is lagging behind. Companies are left exposed to the new risks associated with AI,” Aras Nazarovas, a senior security researcher at Cybernews, warns.

What does AI find about AI on Fortune 500 companies’ websites?

Cybernews researchers analyzed websites of Fortune 500 companies and found that a third of companies (33.5%) focus on broad AI and big data capabilities rather than specific LLMs.  They highlighted AI for general purposes like data analysis, pattern recognition, system optimization, and others.

More than a fifth of companies (22%) emphasized their AI adoption for a functional application across various specific domains. These entries describe how AI is being used to address business problems, such as inventory optimization, predictive maintenance, or customer service.

For example, dozens of companies already explicitly mention using AI for customer service, chatbots, virtual assistants, or related customer interaction automation. Similarly, companies say they use AI to automate “entry-level positions” in areas like inventory management, data entry, and basic process automation. 

Some companies like to take things into their own hands, developing proprietary models. Around 14% of companies specified their own internal or proprietary LLMs as a focus, such as Walmart’s Wallaby or Saudi Aramco’s Metabrain.

“This approach is particularly prevalent in industries like Energy and Finance, where specialized applications, data control, and intellectual property are key concerns,” Nazarovas noted.

A similar number of companies gave AI strategic importance, indicating AI integration within an organization’s overall strategy.

Fewer companies, only around 5%, proudly declare reliance on external LLM services from third-party providers, leveraging providers like OpenAI, DeepSeek AI, Anthropic, Google, and others.

However, there are also a tenth of the companies that only vaguely mention AI use, without specifying the actual product or its use.

“While only a few companies (~4%) mention a hybrid or multiple approach towards AI, blending proprietary, open source, third-party, and other solutions, it is likely that this approach is more prevalent as the experimentation phase is still ongoing,” Nazarovas notes. 

The data suggests companies often don’t want to explicitly name their use of AI tools. Only 21 companies mention the use of OpenAI, DeepSeek (19), Nvidia (14), Google (8), Anthropic (7), META Llama (6), and less for Cohere and others.

Meanwhile, for comparison, Microsoft boasts that over 85% of Fortune 500 companies utilize its AI solutions. Other reports suggest that 92% of the 500 companies use OpenAI products.

AI is here, and so are the risks

YouTube’s algorithm recently flagged tech reviewer and developer Jeff Geerling’s video for violating community guidelines. The automated service determined that the content “describes how to get unauthorized or free access to audio or audiovisual content, software, subscription services, or games.”

The problem is that the YouTuber never described “any of that stuff.” He appealed, but his appeal was rejected. However, after some noise on social media, the video was later reinstated after what Geerling presumes was “a human review process.”

Many smaller creators might never get similar treatment. 

This story is just the tip of the iceberg of the risks of AI adoption. Cybernews researchers listed many more:

  • Data Security/leakage: This is the most commonly mentioned security concern, appearing in a significant number of entries across all industries. Issues related to protecting sensitive data, including personally identifiable information (PII), health information, and operational data, are consistently highlighted.

  • Prompt injection: Vulnerabilities associated with prompt manipulation and insecure inputs are also frequently noted, particularly in the context of chatbots, search engines, and other interactive AI systems.

  • Model integrity/poisoning: Concerns about the integrity of LLMs and the potential for poisoning training data are present, especially for proprietary models. This includes risks related to biased outputs and manipulated model behavior.

  • Critical infrastructure vulnerabilities: For organizations operating in critical infrastructure sectors (e.g., energy, utilities), the security of AI integrated into control systems and operational technologies is a major risk.

  • Intellectual property theft: Protecting proprietary LLMs, algorithms, and AI-related intellectual property is a concern, particularly for companies investing heavily in internal AI development.

  • Supply chain/external risks: Risks associated with third-party LLM providers, partner LLMs, and the broader AI supply chain are also mentioned, highlighting the need for secure vendor management and risk assessment.

  • Bias/algorithmic bias: Concerns about bias in LLM outputs and algorithmic decision-making are present, emphasizing the need for fairness and ethical considerations in AI development and deployment.

  • Insecure output: Risks related to LLMs generating harmful, misleading, or insecure outputs are noted, particularly in applications where the AI's response directly impacts users or systems.

  • Lack of transparency/governance: Issues related to the lack of transparency in LLM decision-making processes and the need for robust AI governance frameworks are also highlighted.

“Critical infrastructure and healthcare sectors, for example, often face unique and heightened security vulnerabilities,” Nazarovas said.

“As companies start to grapple with new challenges and risks, it’s likely to have significant implications for consumers, industries, and the broader economy in the coming years.”

Reckless AI adoption

“AI was adopted rapidly across enterprises, long before serious attention was paid to its security. It is like a wunderkind raised without supervision—brilliant but reckless. In environments without proper governance, it can expose sensitive data, introduce shadow tools or act on poisoned inputs. Fortune 500 companies have embraced AI, but the rulebook is still being written,” says Emanuelis Norbutas, Chief Technology Officer at nexos.ai.

Emanuelis adds: “As adoption deepens, securing model access alone is not enough. Organizations need to control how AI is used in practice — from setting input and output boundaries to enforcing role-based permissions and tracking how data flows through these systems. Without that layer of structured oversight, the gap between innovation and risk will only grow wider.”

Common strategies to mitigate the risk

The regulation of artificial intelligence (AI) in the US is currently a mix of federal and state efforts, with no comprehensive federal law yet established.

Several frameworks and standards are emerging to address AI and LLM security.

The National Institute of Standards and Technology (NIST) has released the AI Risk Management Framework (AI RMF), which provides guidance on managing risks associated with AI for individuals, organizations, and society.

The EU has passed the AI Act, a regulation aiming to establish a legal framework for AI in the European Union. The act raises requirements for high-risk AI systems, including security and transparency obligations.

ISO/IEC 42001 is another international standard that specifies requirements for establishing, implementing, maintaining, and continually improving an Artificial Intelligence Management System (AIMS). It focuses on managing risks and ensuring responsible AI development and use.

“The problem with frameworks is that AI's rapid evolution outpaces current frameworks and presents additional hurdles, vague guidance, compliance challenges, and other limitations,” Nazarovas said. “Frameworks won’t always provide effective solutions to specific problems, but they surely can strain companies when enforced.”



Legal Disclaimer:

EIN Presswire provides this news content "as is" without warranty of any kind. We do not accept any responsibility or liability for the accuracy, content, images, videos, licenses, completeness, legality, or reliability of the information contained in this article. If you have any complaints or copyright issues related to this article, kindly contact the author above.

Share us

on your social networks:
AGPs

Get the latest news on this topic.

SIGN UP FOR FREE TODAY

No Thanks

By signing to this email alert, you
agree to our Terms of Service