Apple adopts Biden administration’s AI safeguards

Apple Intelligence: Coming Soon, with Safety in Mind

Apple has pledged to prioritize safety in the development of its upcoming artificial intelligence (AI) features, known as Apple Intelligence. The company has joined the Artificial Intelligence Safety Institute Consortium (AISIC), a group established by the Biden-Harris administration to promote responsible and ethical AI practices.

Adherence to Safety Safeguards

By becoming a member of AISIC, Apple has committed to adhering to a set of voluntary safeguards for AI development. These safeguards include:

  • Testing AI systems for security flaws and reporting the results to the government
  • Creating mechanisms to inform users when content is AI-generated
  • Developing standards and tools to ensure the safety of AI systems

Apple Intelligence: Upcoming Features

Apple Intelligence is expected to be widely available in September with the release of iOS 18, iPadOS 18, and macOS Sequoia. Some of the anticipated features include:

  • Integration with ChatGPT, OpenAI’s AI chatbot
  • Improved Siri functionality
  • Enhanced AI-driven recommendations in various apps

Elon Musk’s Concerns

The announcement of Apple Intelligence has drawn criticism from Elon Musk, who is the CEO of Tesla and Neuralink and the owner of X. Musk has voiced concerns about the security risks of Apple devices integrating AI, particularly ChatGPT. Notably, neither Tesla nor Neuralink is a member of AISIC.

Legal Considerations

While the AISIC safeguards are voluntary, the European Union (EU) has adopted the AI Act, a set of legally binding regulations designed to protect citizens from high-risk AI systems. Most of the AI Act’s obligations will take effect in 2026 and will apply to companies operating within the EU.

Conclusion

Apple’s commitment to safety in the development and deployment of AI is a positive step towards responsible and ethical AI practices. While the company’s upcoming AI features hold great promise, it remains crucial to address the risks associated with AI and to implement measures that mitigate them. As AI adoption continues to grow, collaboration between governments, researchers, industry leaders, and the public will be essential to ensuring its safe and beneficial use.