May 29, 2025
Submitted via Regulations.gov
Suzanne H. Plimpton
Reports Clearance Officer
National Science Foundation
2415 Eisenhower Avenue
Alexandria, VA 22314
RE: National Science Foundation’s Development of a 2025 National Artificial Intelligence Research & Development Strategic Plan
Dear Ms. Plimpton:
On behalf of the more than 3 million members of the National Education Association (NEA), we submit the following response to the National Science Foundation’s request for information on the development of an artificial intelligence (AI) research and development (R&D) strategic plan. As AI continues to reshape aspects of society, we strongly encourage the NSF and the federal government to consider the growing role that AI will play in the education sector, in learning environments, and in the lives of educators, students, and families across the country.
In accordance with the notice, “This document is approved for public dissemination. The document contains no business-proprietary or confidential information. Document contents may be reused by the government in developing the AI Action Plan and associated documents without attribution.”
At the core of our recommendations is the belief that students and educators must remain at the center of education. The NEA envisions AI-enhanced technologies as an aid to public educators and education, not as a replacement for meaningful and necessary human connection. The use of AI should not displace or impair the connection between students and educators, a connection that is essential to fostering academic success, critical thinking, interpersonal and social skills, emotional well-being, creativity, and the ability to fully participate in society. AI should be a tool to promote educator-guided innovation, uphold student privacy, advance equity, and ensure evidence-based practices are used in all learning environments.
Educator Involvement in AI Development & Implementation
Educators are on the front line of integrating AI into classrooms, but they are often excluded from decision-making processes related to AI tool development. Educator involvement is critical to ensuring that AI is implemented in ways that are effective, accurate, and appropriate for learners at all levels.
R&D should prioritize human-centered, educator-involved AI systems that amplify the expertise of educators and reflect best practices. Funding should support approaches that bring educators into the research process as co-creators rather than solely end users. One example would be funding research hubs that embed educators, researchers, and developers in the collaborative design and evaluation of AI tools.
Fairness and Accessibility in AI Integration
AI tools in education must be developed and deployed in ways that ensure fairness and accessibility for all students, regardless of socioeconomic background, race, disability, or zip code. Many underserved communities continue to face barriers to accessing quality education, including limited access to technological resources. Education systems must be able not only to provide AI tools but also to guarantee the technical support, devices, and internet infrastructure necessary to reliably access and use AI both in the classroom and at home. Additionally, algorithmic bias can reinforce many of these structural inequities.
The NSF should fund research to detect, mitigate, and monitor bias across race, gender, language, disability, and socioeconomic status in AI-driven assessments and tools to help close opportunity gaps.
Student & Educator Data Privacy Protections
AI systems in education must be guided by strong ethical principles, including transparency, accountability, and student and educator data privacy. As the use of AI tools for learning and assessment has grown exponentially, the risk of data misuse and abuse has also increased. AI systems should be designed with robust encryption, minimal data collection, and clear protocols for data storage. Educators, students, and their families must have full transparency around what data is collected, how it is used, and who has access to it. There must be clear guidelines in place to ensure compliance with existing privacy laws such as the Family Educational Rights and Privacy Act (FERPA) and the Children’s Online Privacy Protection Act (COPPA).
The proliferation of deepfake technology poses significant risks, especially for students, who may be more vulnerable to misinformation, manipulation, and exploitation. These fabricated media can create harmful content that damages reputations, invades privacy, and poses serious risks to the mental health and safety of young people. Additionally, the rise of AI companion technology raises concerns about privacy and the potential for artificial interactions to exploit users’ emotional vulnerabilities or sensitive data.
The NEA recommends that the NSF invest in AI privacy research, establish clear federal standards for data governance, and ensure AI systems do not normalize surveillance of students and educators.
Adoption of Evidence-Based AI Technology
AI should be adopted only once there is data supporting a tool’s appropriateness and efficacy with potential users, and, for instruction-focused AI, its alignment with high-quality teaching and learning standards and practices. AI tools should be tested in real-world educational settings to evaluate their effectiveness, ensure they enhance learning outcomes, and prevent unintended negative consequences. Various types of tools – including but not limited to adaptive learning platforms, intelligent tutoring systems, AI-powered writing assistants, language learning tools, and classroom management tools – are regularly used in classrooms across the country. It is critical that evidence supporting specific tools comes either from research conducted and reviewed by independent researchers or from industry-sponsored research that adheres to the same standards of methodology and peer review as independent research. Where such research is unavailable, AI should be adopted only on a pilot or trial basis, with evidence collected and analyzed in a timely manner and an agreement in place to cease use of the technology if the research does not show the intended benefits toward educational goals.
The NSF should fund rigorous, educator-involved research and real-world pilot programs to assess AI’s impact on student outcomes and educator workload. These could include district-university partnerships to pilot AI tools in real classrooms, with safeguards for data use and transparency of outcomes. It is critical that AI adoption is grounded in clear, transparent evidence, with a commitment to discontinue use where benefits are not demonstrable.
AI Literacy & Ongoing Professional Development for Students & Educators
Effective, safe, and equitable use of AI technology in education requires that students and educators become fully AI literate and develop a greater sense of agency with this technology. AI literacy must be part of every student’s basic education and every educator’s professional preparation and development. Curricular changes should be made to incorporate AI literacy across all subject areas and educational levels so that all students understand the benefits, risks, and effective uses of these tools. Educators must also be afforded high-quality, multifaceted, ongoing professional learning opportunities that increase their AI literacy and help them understand what specific AI is being used in their educational settings, how, and why.
The NEA recommends that the NSF support research into AI literacy curricula across K-12 and in educator preparation programs. Professional development must help educators understand the potential, risks, and limitations of AI and empower them to guide students in using these technologies.
Energy Consumption & Environmental Impact of AI
The rapid advancement of AI technologies comes with significant energy consumption, raising concerns about sustainability and environmental impact. Although these technologies operate in virtual spaces, AI and cloud computing will continue to consume increasing amounts of energy and require larger quantities of natural resources, which will likely increase greenhouse gas emissions. As AI becomes more integrated into education and society, it is imperative to develop sustainable AI solutions that prioritize environmental responsibility.
The NSF should invest in research focused on energy-efficient AI models. Additionally, partnerships between AI researchers and environmental scientists should be encouraged to identify strategies that make AI development more sustainable. Educational institutions should be provided with best practices for minimizing their carbon footprint while leveraging the benefits of AI tools.
AI for Students with Disabilities
AI-enabled systems offer many potential opportunities for disability inclusion and independence, revolutionizing assistive technologies. AI tools can empower individuals with disabilities to meet personal needs, enhance personal mobility, and support communication through eye-tracking and voice-recognition software, among other benefits. The adaptive nature of AI provides a pathway to address specific individual needs, significantly expanding the possibilities for reasonable accommodations for both students and educators. It is critical that AI resources are developed for students with diverse learning needs. Actively involving people with disabilities in the development, design, and maintenance of AI systems ensures technology that is not only compliant with accessibility standards but also genuinely user-centric, accounting for the unique challenges and needs of individuals with disabilities.
The NSF should fund R&D on assistive AI technologies and require user-centered design processes that involve students with disabilities and their educators. It is imperative that AI support the full participation of learners with diverse needs.
The NEA respectfully submits the above comments for consideration in response to this request for information regarding the NSF’s development of an AI strategic plan. We look forward to collaborating with the NSF in its future work on AI.
Sincerely,
Daaiyah Bilal-Threats
Senior Director, Education Policy and Implementation Center
National Education Association