The Future and Necessity of AI Regulation

Artificial intelligence’s rapid rise has revolutionized industries by generating content quickly and efficiently, particularly with tools like ChatGPT.

Published by Saanvi Goel on October 24, 2024

This inquiry-driven project may reflect the author's personal views; it aims to enrich discourse on the problem it addresses.


Support:

Artificial intelligence’s rapid rise has revolutionized industries by generating content quickly and efficiently, particularly with tools like ChatGPT. With its recent extension to consumers, ordinary people can now produce long texts and images and analyze complex data (Pimentel). While many businesses have embraced AI for its innovative potential, this sudden explosion has also sparked discourse over its safety. AI’s capacity for generating synthetic content and spreading misinformation has led many critics to speak out, worried that the harms of AI use may outweigh its benefits. Given AI’s fast-paced growth, we must urge our government to regulate its use: holding companies and developers accountable for their use of data, protecting intellectual property from being used without consent, mitigating prevalent privacy concerns, and limiting the perpetuation of bias.

A central challenge of AI regulation is keeping pace with the technology's rapid advancement. AI systems evolve faster than regulatory bodies can draft and enforce rules, creating a gap between innovation and any potential governance. For example, while AI was initially focused on narrow tasks like facial recognition, it has quickly expanded to complex decision-making in fields like autonomous vehicles and healthcare. This rapid evolution can be attributed to corporations such as OpenAI, Microsoft, and Google engaging in an AI race, each looking to upstage the others with new innovations. Left unchecked, however, this race can become incredibly reckless: the pace of development limits what can be regulated, leaving the door open for immense potential harm (Wheeler). This constant acceleration means delaying regulation will only make oversight more difficult later on. Since AI adapts and learns autonomously, even a slight delay in regulation could allow harmful practices to solidify and become widespread. Addressing this issue now is crucial to avoid governance gaps that could carry significant social, economic, and ethical consequences, further underscoring the urgency of the matter.

Without early intervention, unregulated AI could create unforeseen risks, such as data breaches, amplification of bias, invasion of privacy, and misuse of findings and data. Unfortunately, the industry's default of self-regulation does not seem a suitable solution to this dilemma; given the precedent of market concentration, invasion of privacy, and spread of misinformation under our current approach, we must find a better course of action (Wheeler). Considering these factors, it is important that we begin regulation immediately, appointing specialists with deep knowledge of artificial intelligence systems as our regulators while keeping our solutions flexible and adaptive so they can keep up with the never-ending development of the technology (Google).

Problems:

When envisioning the policy of our future, we first need to consider who the burden of regulation should fall upon. As more power falls into the hands of Big Tech companies like Google, Amazon, and Meta, with their increasing control over integral aspects of our lives, they must also adopt the responsibility that accompanies this power, remaining ethical and transparent in their practices (van Rijmenam). However, as seen in previous instances and in the state of the Internet (particularly social media) today, these companies have not handled accountability well, often deflecting blame and hiding shady practices, with whistleblowers as our only form of defense. Governmental regulation alone will also fall short, as “...law advances incrementally while technology evolves exponentially” (Levin). Evidently, there needs to be some balance, with both governments and companies taking responsibility for regulating artificial intelligence; the question is how much burden should fall upon each side. The second facet of accountability is whom to hold responsible for errors and misinformation, especially when the stakes are so high. Is the user at fault for entering a flawed input; the programmer for their algorithm; the owner for their practices and lack of foresight; or the AI itself? Furthermore, it is unclear how citizens who are wronged can take legal action against an artificial intelligence model, or which party can be charged with the AI’s mistakes. Because artificial intelligence models are not conscious beings, they cannot be held liable for their mistakes in the same way as humans, meaning a human must accept this liability (Syed). Currently, the landscape remains incredibly murky, which our policy should also address.

The next issue is how artificial intelligence models utilize existing works, basing their new creations on pre-existing pieces. This has become a major point of contention in art, literature, photography, and beyond, straining copyright laws that were never written to handle the vague line of an AI’s “original work” (Marr). For instance, AI-generated artwork that closely resembles the style of renowned artists, like the controversial "Edmond de Belamy," has sparked debates about intellectual property rights. Pieces created by the art collective Obvious were based on a dataset of portraits spanning six centuries, raising concerns about whether the resulting image constituted a new, original work or an infringement on the artists whose styles were used (Bailey). To combat this issue, we need to update our copyright laws to encompass the broader scope of AI art, ensuring clearer attribution and fair compensation.

Privacy is another concern critics have vocalized since the beginning: because of complicated neural network systems, not even programmers fully understand what machine learning algorithms are doing with our data. This obscurity was starkly illustrated by the Cambridge Analytica scandal, in which personal data from millions of Facebook users, harvested beginning in 2014, was used without their consent to influence a political campaign. Setting aside the complex algorithms at play, the fundamental issue was the lack of transparency, the absence of control over personal information, and users' ignorance of how much data was being collected, with the clear and shocking intent of manipulation (Cadwalladr). Such cases underscore the urgent need for more stringent privacy regulations and clearer guidelines on how data is collected, used, and protected, helping to safeguard individuals' rights in an increasingly data-driven world.

Finally, an issue that underscores why we must act now is the perpetuation of bias and discrimination. Our data reflects the biases of our population, and those same biases seep into the machine learning models trained on it. Whenever AI is used in consequential settings such as healthcare, advertising, and policing, it can perpetuate stereotypes long after society has begun to move past them. These are high-stakes applications, and biased outputs can be life-threatening for those affected. For example, predictive algorithms in healthcare can produce less accurate treatments and diagnoses for women and people of color due to their underrepresentation in the data. Advertising, such as for job roles, can push higher-paying jobs to men rather than women based on the information currently available. Even in policing, AI tools trained on historical arrest data shaped by immense bias against people of color can continue patterns of racial and minority profiling (IBM). The fundamental problem lies in the training data: because it is filled with old biases, it resurrects concepts from long ago and reinforces them, impeding social change. Worse, the common assumption that technology and AI are more accurate than humans cements this bias as fact, with people not taking the extra time to verify the results provided. Even more troubling is how machine learning models collect their data: most scrape it from the Internet. As we continue to flood the Internet with AI-generated content, models will eventually begin training on their own output. Once this happens, these biases will be firmly cemented on the Internet and hard to remove, potentially making the algorithms more extreme as their data continues to corroborate itself (Rao). This is a genuinely alarming prospect, as we have only limited time to act before this bias becomes impossible to erase.
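To see why this feedback loop is so dangerous, consider a minimal toy simulation; this is an illustrative sketch, not the methodology of the reporting cited above. A "model" is fit to a pool of data and generates new content, and, under the stylized assumption that generated content under-represents rare, tail cases, that content becomes the next generation's training pool:

```python
# Toy simulation of a model repeatedly training on its own output.
# The 2-sigma cutoff is a stylized assumption standing in for generated
# content under-representing rare, tail cases.
import numpy as np

rng = np.random.default_rng(0)

# Generation 0: "human" data from a diverse population.
data = rng.normal(loc=0.0, scale=1.0, size=10_000)

for generation in range(1, 7):
    mu, sigma = data.mean(), data.std()           # "train" on the current pool
    samples = rng.normal(mu, sigma, size=20_000)  # the model generates content
    # Only "typical" outputs survive; tails beyond 2 sigma are lost.
    data = samples[np.abs(samples - mu) < 2 * sigma]
    print(f"generation {generation}: spread of training pool = {data.std():.3f}")
```

Under these assumptions the pool's spread shrinks by roughly 12% every generation: minority and edge-case perspectives disappear first, which is precisely the self-corroborating dynamic described above.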

Solutions:

Now that we have established the core concerns of regulating AI, we can begin to think of ways to resolve this pressing issue. First, on the question of who should hold responsibility, we must acknowledge the complexity of the challenge and the need for a multifaceted approach. One key strategy involves enhancing government oversight through established privacy and data protection laws, such as the California Consumer Privacy Act (CCPA). The CCPA gives consumers more power over how their data is used, ensuring they can opt out of harmful data practices and delete personal information that has been collected (California). Regulations such as this one provide a framework for protecting individual data and holding entities and corporations accountable. Additionally, a multistakeholder governance model, with input from industry, academia, and government, would offer a balanced approach to overseeing AI development. This can mitigate concerns about stifling development while ensuring no facet is left unregulated. Such a collaborative model helps address diverse perspectives and ensures that AI systems are designed and operated in ways that are ethical and aligned with societal values (Li).

Next, to address the issue of fair use, we need to update our existing frameworks to ensure fair compensation and attribution for original content creators. New regulations should ensure that creators are fairly compensated when their works are used in the development of AI models. Current interpretations of "fair use" in AI systems, particularly when training models on copyrighted data, often favor tech companies. While some argue that such training falls under fair use, many contend that it exploits original creators without proper credit or remuneration (Milano). There is no reason to give AI preferential treatment here; we can maintain strict regulations, similar to those we have now, that favor content creators over corporations. Under this approach, we would implement stronger copyright protections for training data by requiring developers to seek permission before using copyrighted content in their training models, ensuring that artists retain control. Furthermore, if artists do consent to having their work used for training, companies should credit them and offer appropriate compensation. We must also update our fair use laws to outline when using content as training data constitutes legitimate use. While this can limit the scope of our AI tools, once a collection of permissible materials has been gathered, models can readily be trained on this growing database. Creating a national registry for widely used AI models is also a popular policy recommendation: developers would disclose their training methods, training data, and the capabilities of their models. Such a registry keeps a record of what data and resources were used to train an algorithm, allowing content creators to take action if their materials are utilized without consent (Cuellar).
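For concreteness, a registry disclosure for a single model might look something like the record sketched below. This is a hypothetical illustration in Python; the field names and values are my own assumptions, not a schema proposed in the cited recommendation:

```python
# Hypothetical sketch of a national-registry disclosure record.
# All field names and example values are illustrative assumptions.
from dataclasses import dataclass, field

@dataclass
class ModelRegistryEntry:
    model_name: str                       # public name of the model
    developer: str                        # organization accountable for it
    training_methods: str                 # high-level description of training
    data_sources: list = field(default_factory=list)         # corpora used
    copyrighted_material_licensed: bool = False               # permission obtained?
    stated_capabilities: list = field(default_factory=list)
    known_limitations: list = field(default_factory=list)

entry = ModelRegistryEntry(
    model_name="example-model-v1",
    developer="Example Labs",
    training_methods="supervised fine-tuning on licensed text corpora",
    data_sources=["licensed-news-archive", "public-domain-books"],
    copyrighted_material_licensed=True,
    stated_capabilities=["text generation", "summarization"],
    known_limitations=["may reproduce biases present in source corpora"],
)
print(entry)
```

A record like this is what would let a creator check whether their material appears in a model's training sources and seek recourse if it was used without consent.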

Due to the nature of neural networks, privacy concerns are the hardest to mitigate and regulate. A common and easy-to-implement solution is adding noise to training data, keeping the data useful for analysis while masking individual information. Decentralized (federated) training is another viable solution, ensuring that local data samples are never exchanged or collected while still allowing the AI to learn from a variety of data sources. Continuing to strengthen and enforce laws similar to the California Consumer Privacy Act also gives users some control over the use of their data and the ability to take action if necessary (Team). The same national registry, with its disclosure of training methods and capabilities, serves this issue as well: it exposes any improper use of data and enables users to stay educated about how their data is used (Milano). Opt-in policies are widely advocated: by choosing to opt in or out of cookies that collect one's data, websites and corporations would have to respect individuals' wishes. This also enables users to choose which specific sites can track their data, preventing data from being collected "...by every actor possible and every place you go" (Miller). It is also vital to be cautious about sampling methods, ensuring corporations do not over-sample or under-sample particular groups; this can be done by reweighing the value of training instances based on their labels and frequency counts (Fernandez). Currently, organizations are taking a multi-pronged approach to their regulatory frameworks, offering AI training and education, creating governing bodies and committees, and performing frequent testing on their AI models. These measures should be expanded and enforced universally through governmental policy, turning these practices into industry standards and expectations (CIPL).
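To illustrate the noise-addition idea, the sketch below publishes a noisy average in the style of differential privacy: the aggregate stays useful while any single record is masked. It is a minimal toy example, and the dataset, bounds, and epsilon value are illustrative assumptions rather than recommendations from the cited sources:

```python
# Minimal differential-privacy-style sketch: release a mean with Laplace
# noise scaled so that no single record noticeably changes the output.
import numpy as np

rng = np.random.default_rng(42)

ages = rng.integers(18, 90, size=5_000)  # stand-in for sensitive records

def noisy_mean(values, lo, hi, epsilon):
    """Mean of bounded values plus Laplace noise scaled to its sensitivity."""
    clipped = np.clip(values, lo, hi)
    sensitivity = (hi - lo) / len(clipped)  # max change one record can cause
    noise = rng.laplace(loc=0.0, scale=sensitivity / epsilon)
    return clipped.mean() + noise

print(f"true mean:  {ages.mean():.2f}")
print(f"noisy mean: {noisy_mean(ages, 18, 90, epsilon=0.5):.2f}")
```

The two printed values differ only slightly, yet an analyst seeing the noisy mean cannot confidently infer anything about any one individual's record.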

Finally, to address bias, we should ensure that training data is diverse and representative of various demographics, cultures, and experiences, which reduces bias from the outset (Passwater). This helps mitigate cases such as the misdiagnosis of women in healthcare, as previously mentioned. Additionally, implementing regular bias audits and testing AI models on diverse datasets can identify and correct biased outcomes both before and after deployment. This includes establishing clear objectives and directives, collaborating with diversity experts for their insights and help designing the audit, establishing ongoing assessment to quell any new biases that arise, and implementing strategies to reduce any biases discovered (Tornone). Companies should also utilize explainable AI techniques to make systems more transparent; when developers do not have to contend with complicated and obscure neural networks, they can understand how the AI reaches its results and mitigate bias within the process ("What is explainable AI?"). It is vital to keep humans involved in the AI decision-making process, as they can catch potential biases, which is essential in important fields such as hiring, policing, and especially healthcare (Wang). We can also use the source of the problem to mitigate the problem itself: bias mitigation algorithms can reduce disparities by applying fairness constraints across various groups (Fernandez). Creating diverse teams of developers and governance bodies likewise brings a variety of perspectives to the design and training of these models, fostering fairness and inclusivity throughout an AI's development (CIPL).
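To make the reweighing and auditing ideas concrete, the sketch below weights each training instance inversely to the frequency of its (group, label) combination and runs a simple demographic-parity check. The synthetic data and group names are assumptions for illustration, not the specific procedures of the cited sources:

```python
# Reweighing by (group, label) frequency, plus a simple disparity audit.
from collections import Counter

# Synthetic training rows: (group, label) pairs; group "B" is under-represented.
rows = [("A", 1)] * 700 + [("A", 0)] * 200 + [("B", 1)] * 30 + [("B", 0)] * 70

# Reweighing: weight each instance inversely to its combination's frequency,
# so rare (group, label) pairs count as much in training as common ones.
counts = Counter(rows)
n, k = len(rows), len(counts)
weights = [n / (k * counts[row]) for row in rows]

# Audit: compare positive-label rates across groups (a demographic-parity check).
def positive_rate(group):
    labels = [label for g, label in rows if g == group]
    return sum(labels) / len(labels)

gap = positive_rate("A") - positive_rate("B")
print(f"positive rate: A={positive_rate('A'):.2f}, B={positive_rate('B'):.2f}, gap={gap:.2f}")
# A recurring audit would flag a large gap for review before and after deployment.
```

Here the audit surfaces a 48-point gap between the groups, exactly the kind of disparity a pre-deployment review should catch, while the weights give the under-represented group's examples proportionally more influence during training.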

Opposing Arguments:

While AI regulation is an imperative measure for governments to pursue, its opponents raise valid concerns about the harm regulation can do to technological development. They argue that strict oversight could stifle innovation and hinder the potential benefits of AI. Regulating too soon may slow development, preventing AI from reaching its full potential in areas like healthcare, education, and environmental protection (Liu). The rapid pace of AI innovation requires flexibility, and rigid rules could limit experimentation and breakthroughs. By imposing early regulations, governments might create barriers that discourage entrepreneurs and startups, delaying the discovery of transformative AI solutions and leaving little room for competition: only the few giants with the resources to contend with numerous regulations would survive, squeezing out smaller entities. Stricter legislation could also disadvantage the economies that adopt it, reducing their ability to innovate at a rapid pace (Timis).

Unfortunately, the risks associated with unregulated AI are far too severe to ignore. We should instead craft feasible and efficient regulations that allow innovation to coexist with ethical safety. By beginning to regulate AI at this early stage, we can foster trust in AI systems; precedent in other industries shows that regulation can encourage innovation by providing clear standards and guidelines rather than restricting growth. A similar approach could work for AI, where targeted regulations promote accountability and encourage developers to tackle complex challenges, knowing they are working within a trusted framework. By balancing regulation and innovation, AI can be developed responsibly, avoiding the long-term harm that could arise from unchecked growth. In this way, regulation is not an obstacle but a necessary tool to ensure AI's benefits are realized safely and equitably (Tartaro).

Summary:

The rapid advancement of AI technology necessitates immediate regulatory intervention to address its potential dangers, including a lack of accountability, intellectual property violations, privacy breaches, and the amplification of bias. Failing to act now could result in long-lasting social and ethical harms that outweigh the risk of stifling innovation, and it is imperative that we carefully consider which policy options will most benefit society as a whole. As consumers, you and your community must urge policymakers to act swiftly, creating frameworks and standards that encourage innovation without compromising our safety and values, and holding companies accountable.

Bibliography

Bailey, Jason. "The AI Art at Christie's Is Not What You Think." Artnome, 15 Oct. 2018, www.artnome.com/news/2018/10/13/the-ai-art-at-christies-is-not-what-you-think. Accessed 5 Sept. 2024.

Cadwalladr, Carole, et al. "How Trump Consultants Exploited the Facebook Data of Millions." The New York Times, 17 Mar. 2018, www.nytimes.com/2018/03/17/us/politics/cambridge-analytica-trump-campaign.html. Accessed 11 Sept. 2024.

“California Consumer Privacy Act (CCPA).” State of California - Department of Justice - Office of the Attorney General, 15 Oct. 2018, oag.ca.gov/privacy/ccpa. Accessed 11 Sept. 2024.

Candelon, Francois, et al. "AI Regulation Is Coming." Harvard Business Review, Sept. 2021, hbr.org/2021/09/ai-regulation-is-coming. Accessed 1 Sept. 2024.

CIPL. “Building Accountable AI Programs: Mapping Emerging Best Practices to the CIPL Accountability Framework.” Feb. 2024, informationpolicycentre.com/uploads/5/7/1/0/57104281/cipl_building_accountable_ai_programs_23_feb_2024.pdf. Accessed 8 Sept. 2024.

Cuellar, Mariano-Florentino, et al. “It’s Time to Create a National Registry for Large AI Models.” Carnegie Endowment for International Peace, 12 July 2023, carnegieendowment.org/posts/2023/07/its-time-to-create-a-national-registry-for-large-ai-models?lang=en. Accessed 3 Sept. 2024.

Fernandez, Franklin Cardenoso. “Bias Mitigation Strategies and Techniques for Classification Tasks.” Holisticai.com, 8 June 2024, www.holisticai.com/blog/bias-mitigation-strategies-techniques-for-classification-tasks. Accessed 11 Sept. 2024.

Google. Building a Responsible Regulatory Framework for AI. Google, 2024, ai.google/static/documents/building-a-responsible-regulatory-framework-for-ai.pdf. Accessed 5 Sept. 2024.

IBM. "Shedding Light on AI Bias with Real-World Examples." IBM, 16 Aug. 2023, www.ibm.com/think/topics/shedding-light-on-ai-bias-with-real-world-examples. Accessed 3 Sept. 2024.

IBM. "What Is Explainable AI (XAI)?" IBM, 29 Mar. 2023, www.ibm.com/topics/explainable-ai. Accessed 5 Sept. 2024.

Levin, Blair, and Larry Downes. “Who Is Going to Regulate AI?” Harvard Business Review, 19 May 2023, hbr.org/2023/05/who-is-going-to-regulate-ai. Accessed 3 Sept. 2024.

Li, Cathy. "To Make the Most of AI, We Need Multistakeholder Governance." World Economic Forum, 14 Nov. 2023, www.weforum.org/agenda/2023/11/ai-development-multistakeholder-governance/. Accessed 11 Sept. 2024.

Liu, Irene. "Can AI Companies Still Move Fast and Break Things despite Pending Regulations?" Thomson Reuters Institute, 7 June 2023, www.thomsonreuters.com/en-us/posts/government/ai-pending-regulations/. Accessed 5 Sept. 2024.

Marr, Bernard. “Is Generative AI Stealing from Artists?” Forbes, 20 Feb. 2024, www.forbes.com/sites/bernardmarr/2023/08/08/is-generative-ai-stealing-from-artists/. Accessed 5 Sept. 2024.

Milano, Brett. "Intellectual Property Experts Discuss Fair Use in the Age of AI." Harvard Law School, 2 Nov. 2023, hls.harvard.edu/today/intellectual-property-experts-discuss-fair-use-in-the-age-of-ai/. Accessed 5 Sept. 2024.

Miller, Katherine. “Privacy in an AI Era: How Do We Protect Our Personal Information?” Stanford HAI, Stanford University, 18 Mar. 2024, hai.stanford.edu/news/privacy-ai-era-how-do-we-protect-our-personal-information. Accessed 6 Sept. 2024.

Passwater, Jules. "Ensuring Diversity in AI Training Data for Inclusive Hiring." LinkedIn, 5 Jan. 2024, www.linkedin.com/pulse/ensuring-diversity-ai-training-data-inclusive-hiring-passwater-jd-nezvc/. Accessed 4 Sept. 2024.

Pimentel, Brandon. "Why AI Still Needs Regulation despite Impact." Thomson Reuters Law Blog, Feb. 2024, legal.thomsonreuters.com/blog/why-ai-still-needs-regulation-despite-impact/. Accessed 5 Sept. 2024.

Rao, Devika. "AI Is Cannibalizing Itself. And Creating More AI." The Week, 30 Aug. 2024, theweek.com/tech/ai-cannibalization-model-collapse. Accessed 4 Sept. 2024.

Syed, Robayet. “So Sue Me: Who Should Be Held Liable When AI Makes Mistakes?” Monash Lens, 29 Mar. 2023, lens.monash.edu/@politics-society/2023/03/29/1385545/so-sue-me-wholl-be-held-liable-when-ai-makes-mistakes. Accessed 2 Sept. 2024.

Tartaro, Alessio, et al. "Assessing the Impact of Regulations and Standards on Innovation in the Field of AI." arXiv, arxiv.org/pdf/2302.04110. Accessed 1 Sept. 2024.

Team, WalkMe. "Privacy Concerns with AI." WalkMe Blog, 28 Dec. 2023, www.walkme.com/blog/privacy-concerns-with-ai/. Accessed 11 Sept. 2024.

Timis, David Alexandru. “How to Regulate AI without Stifling Innovation.” World Economic Forum, 26 June 2023, www.weforum.org/agenda/2023/06/how-to-regulate-ai-without-stifling-innovation/. Accessed 11 Sept. 2024.

Tornone, Kate. “How to Audit AI Tools for Bias.” HR Dive, 17 July 2023, www.hrdive.com/news/how-to-conduct-AI-bias-audit/687024/. Accessed 10 Sept. 2024.

van Rijmenam, Mark. "Privacy in the Age of AI: Risks, Challenges and Solutions." The Digital Speaker, 16 Feb. 2023, www.thedigitalspeaker.com/privacy-age-ai-risks-challenges-solutions/. Accessed 6 Sept. 2024.

Wang, Ge. “Humans in the Loop: The Design of Interactive AI Systems.” Stanford HAI, Stanford University, 21 Oct. 2019, hai.stanford.edu/news/humans-loop-design-interactive-ai-systems. Accessed 3 Sept. 2024.

Wheeler, Tom. “The Three Challenges of AI Regulation.” Brookings, 15 June 2023, www.brookings.edu/articles/the-three-challenges-of-ai-regulation/. Accessed 5 Sept. 2024.

Saanvi Goel

2024 Summer Fellow
