Edited several times, I should add: IANAL, but this sounds similar to meta releasing llama weights. I think that the spirit of the European law is to control concrete uses of AI and not a broad distribution of weights and architecture. So my question is: Does the EU AI act ban this distribution?, I think it provides more competition and options for Europeans.
Edited: Thinking a little more, installing open weights could allow backdoors (in the form of a way to manipulate intelligent agents via specials prompts designated to control the system), so perhaps from a national security point of view some care should be taken (but I personally hate that). So another question: Is there a way to control if open weights can create back doors (via prompt injection)?, I recall a paper in which prompt by symbols like 0?,#2! could put the system in a state in which one can read information that the LLM is asked to hide (that is a well known attack available to those that know the weights).
Another question: Is fine tuning or Lora a way to eliminate o amilliorate such prompt attacks?, is there any python library to defend against such attacks. Download - install - modify by fine tune or lora - now you are protected.
"AI systems should fall within the scope of this Regulation even when they are neither placed on the market, nor put into service, nor used in the Union."
I don't really understand the limits of it's scope e.g. the difference between making a system available vs. controlling how it's used is not clear to me. I don't think you can escape the regulation of high-risk uses by offering a "general purpose" AI with no controls on how it's used.
In terms of the open-source nature - I can see it being treated like giving away any other regulated product e.g. medication, cars, safety equipment etc. The lack of cost won't transfer the liability from the supplier to the consumer.
https://eur-lex.europa.eu/legal-content/EN/TXT/?uri=CELEX:52...