AI Regulation

All of the advancements in generative AI have led to a somewhat natural consequence: governments want to get involved. Yay. Of these, the US government is the one able to exert the most influence, given the concentration of tech giants within its sphere of influence.

Of note, Sam Altman at OpenAI seems to be in favour of this regulation. My feeling is that he is attempting to help build a regulatory moat: erecting barriers that keep would-be competitors and new entrants out of the market.

The coziness between the largest AI companies and the US government feels very wrong to me in multiple ways:

  • influence and pressure flowing from government to companies
  • internal safety choices and biases propagating forward within companies create a feedback loop of priors that narrows the range of outcomes future models can produce
  • the combination of the two can produce paternalistic outcomes that serve the interests of those in power rather than citizens

Americentrism and a failure to understand the internet

Hearing all of the talk of regulation from the US government really makes me wonder whether anyone within it understands how AI models are trained, distributed, or otherwise used. In the time it took to write that last sentence, I downloaded a model with 7 billion parameters that runs on my laptop. It’s mine now, forever.
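
To make that concrete, here is a minimal sketch of what “downloading a model” amounts to, assuming Python with the huggingface_hub package installed; the repo id below is purely illustrative, and any openly licensed ~7B-parameter model works the same way. Once the weights are on disk, no regulator sits in the loop.

```python
# Minimal sketch: pulling an open model's weights onto local disk.
# Assumes `pip install huggingface_hub`; the repo id is a hypothetical
# placeholder -- substitute any openly licensed ~7B-parameter model.
from huggingface_hub import snapshot_download

local_dir = snapshot_download(
    repo_id="example-org/open-7b-model",  # hypothetical repo id
    local_dir="./models/open-7b",         # weights land here, on my machine
)

print(f"Model weights now live at: {local_dir}")
# From here they are just tensors on disk: loadable with llama.cpp,
# transformers, or anything else, entirely offline.
```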

I’m Canadian, and who is the US government to tell me how to act or what to do with this technology? If I were to build something, would I have to submit to the US government in order to do business in the US? Will US-based companies flee to less restrictive locales? Will this stifle new entrants to the market, eventually lessening competition and, ultimately, innovation?

Chaotic actors

Did you know that information classified as “top secret” by the US government that’s leaked to the public creates a real problem for people who hold clearances? Even though the information is “public”, they’ve never been cleared to view it, and viewing it can jeopardize their clearance renewal.

Let’s now apply a similar treatment to AI in the US and whatever regulatory framework companies must reside within. If a foreign entity (company, country, whatever…) trains or gives away a vastly superior model that companies in the US are forbidden from using, what then? Do they get left behind? Do they leave the US? Does the government abandon its policies and tactics (I think we know that this is a non-starter)?

I can understand the interest in being seen to be doing something, but at the same time I really don’t see how this doesn’t create an incentive to be headquartered in a more innovation-friendly country.

Information wants to be free

I’m not sure if this is the answer, but I feel that it’s at least something: as many open-source models as possible. A ubiquity of technology becomes anti-fragile to the whims of any one regulator and makes it easy to switch out models and carry on as you were. Don’t code to a single company’s or model’s API; use an abstraction layer that allows flexibility, as in the sketch below. This protects your work and, in the case of paid products, prevents a hostage situation.
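
As a sketch of what that abstraction might look like, here is a minimal Python example; the class and function names are my own illustration rather than any particular framework’s API. Application code depends only on a small interface, so swapping a hosted model for a local one (or vice versa) is a one-line change.

```python
# Illustrative abstraction layer over text-generation backends.
# All names here are hypothetical; the point is that application code only
# ever sees `TextGenerator`, so models and vendors stay swappable.
from typing import Protocol


class TextGenerator(Protocol):
    def generate(self, prompt: str) -> str:
        ...


class LocalModelBackend:
    """A model running on hardware you control (e.g. a downloaded 7B model)."""

    def __init__(self, model_path: str) -> None:
        self.model_path = model_path  # weights you own outright

    def generate(self, prompt: str) -> str:
        # Placeholder: call your local runtime (llama.cpp, transformers, ...) here.
        return f"[local:{self.model_path}] response to: {prompt}"


class HostedModelBackend:
    """A paid, hosted API wrapped behind the same interface."""

    def __init__(self, vendor: str) -> None:
        self.vendor = vendor

    def generate(self, prompt: str) -> str:
        # Placeholder: call the vendor's SDK here.
        return f"[{self.vendor}] response to: {prompt}"


def summarise(doc: str, llm: TextGenerator) -> str:
    # Application code depends only on the interface, never on a vendor.
    return llm.generate(f"Summarise the following:\n{doc}")


if __name__ == "__main__":
    llm: TextGenerator = HostedModelBackend(vendor="some-vendor")
    # If the vendor becomes hostile, restricted, or regulated away,
    # switching to a local model is one line:
    llm = LocalModelBackend(model_path="./models/open-7b")
    print(summarise("Regulators gonna regulate.", llm))
```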

The pace of innovation is often at odds with regulation, and in the case of AI, the failure to consider second- and third-order consequences will have an impact of its own. The current pandering to hysteria around AI is short-sighted and will lead to outcomes, some predictable and some not, that could have been avoided.

ai, regulation, americentrism