
US Government Takes Action for Responsible and Trustworthy AI Development


As the UK prepares for its first AI Safety Summit, the US government has issued a significant directive aimed at fostering the secure and reliable advancement of Artificial Intelligence (AI). Here’s a simplified breakdown:


  • The US government is encouraging a united approach across all its departments to ensure that AI is regulated properly throughout the US.

  • AI has immense power to transform our world, enhance prosperity, and drive innovation. However, it also brings potential risks such as bias, fraud, discrimination, and misinformation.

  • For AI to truly be beneficial, it must be used responsibly and transparently, with proper legal guidelines in place to manage its potential risks and unlock its capabilities.

  • Collaboration among the government, businesses, universities, and communities is essential to achieve these objectives.

The Executive Order signifies a monumental step towards establishing a robust policy framework that aligns AI technologies with democratic values and civil liberties. It underscores the necessity of continuous efforts and collaboration among various stakeholders, including tech giants, academics, and civil society, to navigate the complexities of AI governance effectively.

For a more detailed understanding of the Executive Order, you can access the fact sheet here.

Key Highlights

A move towards more open and ethical frameworks, requiring developers of GenAI foundational models to disclose the findings of rigorous safety evaluations along with their mitigation strategies.

Recognition of the need to provide educators with resources that support the responsible integration and use of GenAI tools, such as AI tutors, in their teaching.

Privacy takes a central role, with particular emphasis on protecting the data of young people.


A concentrated effort to reduce algorithmic bias, which is present in all GenAI foundational models currently on the market, with the aim of promoting fairness and objectivity in the outcomes these models produce.


Significant emphasis on watermarking and identifying AI-generated and synthetic content. This is crucial for moving beyond the current discourse that simplistically treats AI content as either deceptive or easily identifiable, and towards a more nuanced understanding and handling of it.


To promote responsible AI use in education, the US Secretary of Education must develop resources, policies, and guidance within a year. These should focus on the safe and nondiscriminatory use of AI, consider its impact on vulnerable communities, and be developed with relevant stakeholders. They must also include an "AI toolkit", based on recommendations from the US Department of Education’s report, to guide education leaders on human review of AI decisions, designing trustworthy and safe AI systems that comply with privacy laws, and establishing guidelines specific to educational contexts.

What is lacking and needs to be addressed


There is no strong call for better AI literacy training. Such training is essential because it teaches people how to use AI technologies ethically and responsibly; without it, there is a real risk of AI being misused.


The framework should place more focus on equal access to technological tools for everyone, everywhere, promoting fairness and inclusivity so that people from all backgrounds can benefit from AI technologies.


Clearer guidance is needed on how current foundational models will be brought into line with the new safety and transparency requirements. Without clear direction for updating existing models to meet the new standards, it will be difficult to ensure the safe and transparent use of AI, which is vital for user trust and reliability.

There should be a call to action to consider the impact of GenAI chatbots and synthetic relationships, such as Snapchat's "My AI", Facebook's "Billie", and others. These tools should be designed with the unique needs of young people in mind, ensuring the technology is suitable and safe and that it enhances their development and learning.


The roadmap has been laid out, but the real challenge lies in its execution.


There is optimism that we are moving towards a future where these revolutionary tools are developed, deployed, and adopted responsibly and ethically, so that they have a positive impact on society and individuals. General guidance is welcome, but the real test will be implementation. Here at Safe on Social, we will continue to focus on assisting educators to teach AI literacy, including ethics and safe use, through tools that enhance classroom and learning outcomes whilst attempting to close the ever-widening digital divide.


-------------


For information on how we can assist your organisation, including keynote bookings, click here

For more information on our school and business AI programs click here

To purchase the first of our AI Lesson Packs for just $89+GST for a whole school license click here


