*This article is a blog post, not an IPTF Journal article.
ANDREW ORFALY
“Am I battling ghost or A.I.?” Kendrick Lamar rapped on “Euphoria,” his first full diss track against Drake in their culture-shaking 2024 rap beef.[1] The week before “Euphoria’s” release, Drake had released “Taylor Made Freestyle,” a four-minute track on which the Canadian rapper enlisted A.I. recreations of the late Tupac Shakur and the still-living Snoop Dogg.[2] Beyond the ethical grey area in which A.I. sound-alikes dwell, they also present a threat to creators and consumers alike.[3] Legislative limits are essential to preserve human artistic integrity.[4]
A simple Google search for “A.I. cover music generator” yields dozens of results, each promising tracks sung by a desired famous voice in exchange for a subscription fee.[5] Many websites specializing solely in audio-driven A.I. promise users the ability to make songs from any voice they desire.[6] Once a sonic swindler has created a track using someone else’s vocal print, they can upload that file to a streaming platform for mass consumption.[7] In 2026, music is mostly consumed through digital service providers (DSPs) like Apple Music and Spotify.[8]
As A.I. capability and accessibility continue to grow, it is vital that legislation provide guardrails for a technology that can cause severe damage to artistic integrity.[9] Since retaking office in 2025, President Trump has walked back restrictions imposed on the A.I. industry under President Biden.[10] The DEEPFAKES Accountability Act, for example, presents a potential tool for curbing A.I.’s vocal-cloning ability.[11] Meanwhile, the European Union has been proactive on A.I. regulation.[12] Its A.I. Act has set hard lines that European A.I. creators are prohibited from crossing.[13]
The Trump administration’s plan, Winning the Race: America’s A.I. Action Plan, seeks to eliminate barriers to A.I. growth and development.[14] Principally, the first pillar of the plan is to “remove red tape and onerous regulation.”[15] To achieve this end, its authors propose that the Federal Trade Commission (FTC) “not advance theories of liability that unduly burden A.I. innovation.”[16]
Perhaps the most significant item to come from this agenda was Executive Order 14148, which revoked President Biden’s Executive Order No. 14110.[17] While not a limit on commercial use of A.I., the requirements that Biden’s order placed on the federal government’s use of artificially created content could have served as a template for commercial restrictions.[18] One such safeguard appears in §4.5(a)(iii), in which President Biden called for a report on standards for watermarking A.I.-generated content.[19] The implication of this request is that watermarks would be placed on A.I.-generated content coming from the federal government.[20]
One Biden-era bill targeting artificially produced content that remains untouched is the “Defending Each and Every Person from False Appearances by Keeping Exploitation Subject to Accountability Act of 2023” (DEEPFAKES Accountability Act).[21] The bill aims to protect individuals from content, both visual and auditory, that mimics their identity.[22] It does not expressly outlaw deepfakes; rather, it imposes rules on their dissemination.[23] For example, the bill would require any piece of altered media that imitates another person and exceeds two minutes in length to carry a label identifying it as artificially edited.[24] Per §1041(e), audio “watermarks” are to take the form of a clear verbal disclosure at the start of any artificially edited track.[25] This requirement serves an effective purpose: placing the disclosure at the beginning, rather than the end, of the track ensures that the listener actually hears it and eliminates the risk of the listener skipping to the next song once the beat cuts out.[26]
Whereas the United States has so far only tangentially addressed the dangers of A.I. in legislation, the European Union has begun implementing the Artificial Intelligence Act (A.I. Act), published in July 2024.[27] The A.I. Act’s division of A.I. into four risk categories (unacceptable, high, limited, and minimal) offers an effective framework.[28] In most cases, the A.I. Act will treat deepfakes as part of the limited-risk category.[29] Items under this umbrella are still required to provide notice that the content is A.I.-made, much like the DEEPFAKES Accountability Act’s provisions.[30] A piece of A.I.-produced music becomes a high-risk usage, however, if the program creating the music was trained on copyrighted material.[31] The EU will compile a publicly accessible database of high-risk A.I. companies.[32] As companies prey upon consumers, it is valuable for the public to be able to check a given A.I. system’s risk level in a government-created database.[33] The practice also encourages compliance, since no reasonable company would want to be deemed high-risk.[34]
Regulating artificial intelligence cannot be a partisan issue.[35] Yet some current United States lawmakers view the rapid development of A.I. as a global race that needs to be won “[just] like we won the space race.”[36] Fortunately, the legislation discussed above, both domestic and foreign, lays the groundwork for constructively and effectively restricting A.I. and protecting the artistry of human-made music.[37]
[1] Kendrick Lamar, euphoria, on euphoria – Single (Apple Music, Interscope Recs. Apr. 30, 2024).
[2] Drake, Taylor Made Freestyle (Apr. 20, 2024), https://www.youtube.com/watch?v=X1f6Ny_aXx4&list=RDX1f6Ny_aXx4&start_radio=1.
[3] See Eve Riskind, The Problem with AI Generated Music, Cornell Daily Sun (Dec. 6, 2024), https://www.cornellsun.com/article/2024/12/the-problem-with-ai-generated-music (explaining how A.I.-generated music makes her feel as a listener).
[4] See Exec. Order No. 14,110, 88 Fed. Reg. 75,191 (Oct. 30, 2023) (explaining how watermarks should be used in A.I.-generated content proliferated by the federal government).
[5] Search for A.I. Music Generator, Google, https://www.google.com/search?q=ai+music+generator&rlz=1C5CHFA_enUS962US965&oq=ai+music+&gs_lcrp=EgZjaHJvbWUqCggAEAAYsQMYgAQyCggAEAAYsQMYgAQyCggBEAAYsQMYgAQyCggCEAAYsQMYgAQyDQgDEAAYgwEYsQMYgAQyCggEEAAYsQMYgAQyDQgFEAAYgwEYsQMYgAQyBwgGEAAYgAQyBwgHEAAYgAQyBwgIEAAYjwIyBwgJEAAYjwLSAQkyMjMxajBqMTWoAgiwAgHxBVjjXqSlFBjq8QVY416kpRQY6g&sourceid=chrome&ie=UTF-8 (last visited Mar. 12, 2026).
[6] See Murf.ai, https://murf.ai/ (last visited Mar. 6, 2026); voicedub.ai, https://voicedub.ai/create (last visited Mar. 6, 2026) (detailing how an audio-driven A.I. service works).
[7] Distribution, Audiosalad, https://audiosalad.com/services/#distribution (last visited Mar. 12, 2026); Distribute Your Music Everywhere, CD Baby, https://cdbaby.com/music-distribution/?_gl=1*1cpcyba*_up*MQ..*_gs*MQ..&gclid=Cj0KCQjwl5jHBhDHARIsAB0YqjzAWkUfhPb-JQ8Vc80tt3SdsEgb0aJg3f9ToQt2bEBmlJUb8vHpK7EaAnN8EALw_wcB&gbraid=0AAAAAD3wZhQjscYkmtPB0tpDoyxveQWYY (last visited Mar. 12, 2026).
[8] See SonoSuite, https://sonosuite.com/blog/top-5-dsps-for-music-discovery (last visited Mar. 6, 2026) (explaining the most used DSPs on the market).
[9] See Riskind, supra note 3 (explaining how A.I.-generated music harms artists).
[10] See President Donald Trump, Removing Barriers to American Leadership in A.I. §5 (Jan. 23, 2025) (revoking Exec. Order No. 14110).
[11] See DEEPFAKES Accountability Act, H.R. 5586, 118th Cong. (2023) (imposing limits on A.I.-generated material).
[12] See Regul. (EU) 2024/1689 of the European Parliament and of the Council of 13 June 2024 laying down harmonised rules on A.I. and amending Regul. (EC) No 300/2008, (EU) No 167/2013, (EU) No 168/2013, (EU) 2018/858, (EU) 2018/1139 and (EU) 2019/2144 and Directives 2014/90/EU, (EU) 2016/797 and (EU) 2020/1828 (Artificial Intelligence Act) (Text with EEA relevance), 2024 O.J. (L 1689) https://eur-lex.europa.eu/eli/reg/2024/1689/oj/eng (categorizing different A.I. usages into risk categories).
[13] Id.
[14] Trump, Removing Barriers to American Leadership in A.I., supra note 10.
[15] Michael Kratsios, David Sacks & Marco Rubio, Winning the Race: America’s AI Action Plan, 3 (July 2025), https://www.whitehouse.gov/wp-content/uploads/2025/07/Americas-AI-Action-Plan.pdf.
[16] Id.
[17] Exec. Order No. 14,148, 90 Fed. Reg. 8237 (Jan. 20, 2025).
[18] See Exec. Order No. 14,110, 88 Fed. Reg. 75,191 (explaining how watermarks should be used in A.I.-generated content proliferated by the federal government).
[19] Id. §4.5(a)(iii).
[20] See id. (explaining how watermarks should be used in A.I.-generated content proliferated by the federal government).
[21] DEEPFAKES Accountability Act, H.R. 5586, 118th Cong. (2023).
[22] See id. (protecting individuals from visual and auditory content that mimics their identity).
[23] See id. (imposing rules on the dissemination of deepfakes rather than banning them outright).
[24] Id.
[25] See id. §1041(e) (requiring that audio altered with A.I. contain a verbal disclosure at the start of the track).
[26] See id. (describing how watermarks on audio clips would be implemented).
[27] See A.I. Act (enacting limits on A.I. usage).
[28] See id. (defining different risk categories for various usages of A.I.).
[29] See id. (explaining how deepfake A.I. usage represents relatively low risk when compared to other uses of A.I., such as biometric verifications).
[30] A.I. Act; Exec. Order No. 14,110 (Oct. 30, 2023).
[31] A.I. Act.
[32] Id.
[33] See id. (detailing how A.I. services will be categorized and that knowledge accessed by the public).
[34] See id.
[35] See Trump, Removing Barriers to American Leadership in A.I. (revoking A.I. guidelines from his predecessor).
[36] Kratsios, Sacks & Rubio, supra note 15, at 3.
[37] See A.I. Act (enumerating the European Union’s approach to A.I.-generated material); see Exec. Order No. 14,110 (Oct. 30, 2023) (seeking to put limits on A.I.-generated material that the federal government produces); see H.R. 5586 (2023) (imposing limits on A.I.-generated material).