Navigating the AI Regulatory Landscape: A Comparative Analysis of US and EU Regulations for Investors & Startup Founders

Guy Fighel
January 31, 2024

The AI factor plays out daily in the tech ventures and investment decisions we come across. Building AI companies, and investing in them, requires staying steps ahead of evolving guidelines.


This article will explore the current AI regulatory frameworks in the United States and the European Union, shedding light on the nuanced challenges and opportunities tech investors and startup founders can expect.

The AI Regulatory Landscape in the US and the EU

IP concerns, job displacement, and security and safety are all covered extensively by the media. Talk of AI regulatory measures accompanies those concerns, and while both the US and the EU take AI regulation seriously, agencies in these regions have taken slightly different approaches.

AI regulation in the United States

Various federal agencies, such as the Federal Trade Commission (FTC) and the National Institute of Standards and Technology (NIST), have taken responsibility for regulating AI technologies in the US. Last fall, President Biden also introduced guidelines and standards for the industry. Let’s take a closer look.


The FTC regulates deceptive and unfair business practices in the US, actively enforcing laws governing consumer protection and competition. Therefore, the FTC has played a pivotal role in shaping early guidelines and regulations around AI.


For example, the agency requires businesses to provide adequate proof for any claims made about the value and results produced by technologies using AI. In other words, a company that wants to claim its AI-powered product outperforms products without the technology must provide evidence for that claim.


The FTC also requires companies to disclose when consumers are interacting with AI chatbots rather than humans.


Meanwhile, NIST takes a research-based approach to AI technology guidelines, contributing research, data, and standards. The organization focuses on cultivating trust in the design, development, use, and governance of AI technologies.


Additionally, NIST hosts the Trustworthy & Responsible AI Resource Center, providing accessible resources to support the responsible and ethical development of AI technologies.


Lastly, President Biden issued an Executive Order in October 2023 establishing new standards for AI safety and security, emphasizing privacy protection, equity, civil rights, consumer welfare, and innovation.


Among the key directives, the order requires developers of AI systems to share safety test results with the U.S. government, sets rigorous standards for safety testing, and addresses potential risks in critical infrastructure and cybersecurity. The order also calls for congressional action on bipartisan data privacy legislation, emphasizing privacy-preserving techniques and addressing algorithmic discrimination. It highlights efforts to protect Americans from AI-enabled fraud, advance responsible AI use in healthcare, education, and the workforce, and promote a fair and competitive AI ecosystem.


This executive order was a huge move forward for AI regulation, and the directives underscore the administration's commitment to responsible and effective government use of AI, promoting international collaboration, and ensuring that AI development aligns with rights, safety, and global challenges. While the actions represent significant strides, ongoing collaboration with Congress and international partners remains essential for comprehensive and bipartisan legislation on responsible AI innovation.

AI regulation in the European Union

In April 2021, the European Commission proposed the "Regulation on a European Approach for Artificial Intelligence," also known as the EU AI Act or EU AI Law. After lengthy debate, the European Parliament and Council recently reached political agreement on the act in December 2023. While the initial proposal was published in 2021, the final text of the EU AI Act has yet to be published, so the exact details of the act remain unknown to the public.


While we don’t yet know the final details, the proposed regulation aims to create a harmonized framework for AI across EU member states, focusing on high-risk AI applications. It addresses issues such as transparency, accountability, and fundamental rights, and it defines AI as broadly as possible to accommodate a rapidly evolving industry.


A core principle of the proposed document is that the EU will classify AI systems into four clearly defined risk categories:

  1. Minimal/no risk (permitted with no restrictions)
  2. AI with specific transparency obligations (permitted with transparency obligations)
  3. High-risk (permitted with compliance requirements and ex-ante conformity assessment)
  4. Unacceptable risk (prohibited)


Most applications of AI technology fall into the first two risk levels; only egregious uses, such as subliminal manipulation, exploitation of minors or disabled persons, social scoring, or biometric identification for law enforcement in public spaces, are classified as unacceptable risk and prohibited.
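For readers who think in code, the four-tier scheme above can be sketched as a simple classification structure. This is purely illustrative: the tier names and example uses are paraphrased from public summaries of the proposal, not from the (still unpublished) final legal text.

```python
from enum import Enum

class RiskTier(Enum):
    """Illustrative sketch of the EU AI Act's four proposed risk tiers.
    Labels paraphrase the proposal; the final text may differ."""
    MINIMAL = "permitted with no restrictions"
    TRANSPARENCY = "permitted with transparency obligations"
    HIGH_RISK = "permitted with compliance requirements and ex-ante conformity assessment"
    UNACCEPTABLE = "prohibited"

# Hypothetical examples of how uses might map to tiers, based on
# public summaries of the proposal; actual classification depends
# on the final legal text.
EXAMPLE_CLASSIFICATION = {
    "spam filter": RiskTier.MINIMAL,
    "customer-service chatbot": RiskTier.TRANSPARENCY,
    "cv-screening tool": RiskTier.HIGH_RISK,
    "social scoring system": RiskTier.UNACCEPTABLE,
}

def is_permitted(tier: RiskTier) -> bool:
    # A system may be placed on the EU market unless it falls in
    # the prohibited (unacceptable-risk) tier.
    return tier is not RiskTier.UNACCEPTABLE
```

The point of the sketch is the asymmetry it makes visible: three of the four tiers are permitted with escalating obligations, and only the last is an outright ban.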


The commission has also created the AI Pact, which encourages companies to voluntarily commit to early implementation of the proposed measures in the AI Act. The Pact will help the industry get a head start on compliance and allow participating companies to stand out as frontrunners in trustworthy innovation.

The Dawn of AI Regulation Within a New Investment Paradigm

While AI regulation is still in its early stages, both the US and the EU are prioritizing the creation of comprehensive guidelines. Both regions view AI technology as a beacon of innovation and progress for citizens, businesses, and the public interest while taking care to protect safety, privacy, and fundamental rights.


Despite lengthy debates and careful deliberation, as seen in the EU, we expect AI legislation to move quickly. It is essential to stay current on the latest regulations, especially as investors in the space.

Implications of AI Regulations on Future Investments

  • Risk-Based Approach and Compliance: The EU's AI Act categorizes AI systems based on risk, with stringent requirements for high-risk and general-purpose AI systems. Investors should pay close attention to the compliance and risk mitigation strategies of AI startups, especially those dealing with high-risk or general-purpose AI.
  • Cybersecurity and Data Privacy: With increasing emphasis on data security and privacy, AI systems vulnerable to data integrity attacks or those handling sensitive personal information must demonstrate robust security measures. Investment in AI startups with strong cybersecurity protocols will be more prudent.
  • Intellectual Property and Copyright Issues: As seen in recent cases involving generative AI models, intellectual property rights are becoming a critical concern. Investors should be wary of potential legal challenges and prioritize startups with clear strategies for respecting and protecting IP rights.
  • Impact on Innovation and Market Competition: While regulations aim to safeguard against AI risks, they also have the potential to stifle innovation or create barriers to market entry. A balanced approach in evaluating startups is necessary, considering both innovation potential and regulatory adherence.

What Investors Should Look for in AI Startups

  • Alignment with Regulatory Frameworks: Startups that are proactive in aligning their AI technologies with current and anticipated regulations will be more resilient and adaptable.
  • Ethical AI and Social Responsibility: Companies that prioritize ethical AI practices, including transparency, fairness, and non-discrimination, will likely fare better in a regulated environment.
  • Diverse and Representative Data Sets: Startups using well-curated, representative datasets for training their AI models can minimize biases and adhere to regulatory demands for inclusivity and fairness.
  • Solid Cybersecurity Infrastructure: Given the rising cybersecurity threats, startups with strong security protocols for their AI systems should be a priority.
  • Sustainable and Responsible AI Development: Companies focusing on the sustainable development of AI, including energy-efficient models and responsible use of resources, align with global trends towards environmental consciousness.

Special Considerations for AI/ML Infrastructure and Open Source Models

When it comes to AI/ML infrastructure and open source models, the regulatory landscape appears less specific and immediate in its impact. This includes areas like observability systems and LLMOps. For these sectors, the current regulations are less directly applicable, offering a bit more flexibility in terms of compliance.

  • Open Source AI Models: The EU exempted open-source AI companies from most of the AI Act’s transparency requirements, unless they are developing models as computing-intensive as GPT-4. This could be a significant advantage for startups and investors focusing on open-source AI models.
  • Observability and Operations Systems: For platforms and tools involved in the monitoring and operational aspects of AI systems, like LLMOps, the regulatory environment is less stringent. These systems play a crucial role in ensuring the efficiency and reliability of AI applications but are not directly creating AI outputs, thus falling outside the scope of the most stringent regulations.

Moving Ahead in a World with Regulated AI

As AI continues to evolve, so does the landscape for tech investing. Navigating these changes requires a keen understanding of the regulatory environment, potential risks, and the ethical implications of AI technologies. 

For investors and founders, the key lies in balancing innovation with compliance, ensuring that AI advancements continue to thrive in a responsible and sustainable manner. Regardless of regulation, we don’t see a strong case for starting a company whose sole purpose is monitoring how a specific system complies with a regulatory requirement. Since most newer startups will probably not be required to adhere to such standards, we question whether compliance-only businesses can grow into very large companies. Regulatory standards tend to change rapidly, and any business relying solely on explaining or enforcing compliance will likely struggle to adapt.

We believe it makes more sense to focus on deeper vertical innovation across the new modern data and AI stack.

If you're building an AI/Data company, check out the Hetz Data Program (known affectionately as SPARQL), which helps founders at the ideation and early stages form a successful company in the data space.