In a recent public comment to the United States Copyright Office, the Federal Trade Commission seemingly expanded upon remarks made at the National Advertising Division in September, signaling that it will aggressively and proactively challenge alleged unfair practices involving artificial intelligence, even if that means stretching the meaning of “unfair” to expand its jurisdiction over such matters.
ChatGPT may be smart enough to pass the bar exam, but lawyers should exercise caution before relying on the Artificial Intelligence (“AI”) platform to conduct any legal business.
On June 22, 2023, Judge P. Kevin Castel of the Southern District of New York issued a lengthy order sanctioning two attorneys for submitting a brief drafted by ChatGPT. Judge Castel reprimanded the attorneys, explaining that while “there is nothing inherently improper about using a reliable artificial intelligence tool for assistance,” the attorneys “abandoned their responsibilities” by submitting a brief littered with fake judicial opinions, quotes, and citations.
Last month, TikTok sued Montana’s attorney general—alleging the state’s recent TikTok ban is unconstitutional and is preempted by federal law.
On May 17, Montana Governor Greg Gianforte signed a first-of-its-kind law banning TikTok from operating in the state, in order “[t]o protect Montanans’ personal, private, and sensitive data and information from intelligence gathering by the Chinese Communist Party.”
In an unsigned per curiam opinion yesterday in Gonzalez v. Google, the U.S. Supreme Court vacated the Ninth Circuit’s judgment (which had held that plaintiffs’ complaint was barred by Section 230 of the Communications Decency Act) and remanded the case. But the Court’s opinion entirely skirted a highly anticipated issue: whether Section 230 does, in fact, shelter as much activity as courts have held to date.
The Supreme Court heard oral argument last week in cases that will have extensive implications for online platforms, and, more broadly, for internet speech across the board. Gonzalez v. Google, in particular, may result in a first-of-its-kind clarification of the scope of 47 U.S.C. § 230.
Last month, the FTC issued a report to Congress advising governments and companies to exercise “great caution” in using artificial intelligence (“AI”) to combat harmful online content. The report responds to Congress’s request to look into whether and how AI may be used to identify, remove, or otherwise address a wide variety of specified “online harms.” Among the “harms” covered by Congress’s request were impersonation scams, fake reviews and accounts, deepfakes, illegal drug sales, revenge pornography, hate crimes, online harassment and cyberstalking, and misinformation campaigns aimed at influencing elections.
Fundamental to the due process of law is notice—a requirement that all parties are made aware that a lawsuit could alter their legal rights or duties. Most defendants are served in person by a process server. But when a defendant is unreachable this way, some creativity may be required, especially when the defendants are traceable only through their actions on the blockchain, an instrument famous in part for its ability to keep its users private. After a hack of almost $8,000,000 of its funds, Liechtenstein-based cryptocurrency exchange LCX AG allegedly traced some of its stolen digital assets to various digital wallets. LCX AG was able to freeze the funds, but with no name attached to the wallets, it still lacked a defendant and a place to pursue legal action. At least, it lacked a physical place. But if LCX AG knew the location of the wallet, then perhaps it could serve the virtual place.
In the first two instalments of our series we examined the progress of English law to provide a secure and certain legal infrastructure for cryptoasset investment and management. In particular, we looked at how recent English case law has addressed the following questions:
(1) Are cryptoassets property and (2) Can…