I'm HIGHLY sceptical. The academics will love it, because they get money. But look at that list of parties involved. More than twenty parties supplying people; none of them will have this initiative on the top of their list of loyalties and priorities.
Meaning, everyone will talk, no one will take charge, some millions will change hands, and we continue with business as usual.
Instead this should have been a single new non-profit or whatever with deep pockets that convinces smart people to give their 100% for a while.
Death by committee. And I say this as someone who was in a multi-million research program across ~8 universities, that was going to do "groundbreaking" research. After a few months everyone was back to pushing their own lines of research, there was almost zero collaboration let alone common language or goal setting.
I can see that you're unfamiliar with how EU grants and these project consortia work, but I don't have much time to address this in great detail.
As someone who has been in these types of projects for a long time, what I can say is that it works, because people do not compete with each other but build it together.
What I can say is, if they have come this far, there are already plans for what to do and how to do it, and none of the parties are inexperienced in these kinds of things.
"It works!" is the only thing that will be visible on web page after hundreds of milions will be burned.
I'm observing a few of these "unprecedented" cooperation projects funded by the EU. A lot of meetings, a lot of managers, plenty of very unskilled people creating a mess, and a few big names doing presentations so that companies will believe everybody knows what they are doing.
Same from the company side: they need to be in those projects to comply with stupid EU rules about being eco.
As someone who has lived through Eurostar and Horizon 2020, and who has participated as both a researcher and corporate partner, I can say: it does not work.
Unless by "work" you mean "successfully passed the post-project review by non-experts based on a bunch of slides".
Point at a single project of this sort that had any tangible output that's still in use.
My experience from these projects is the opposite. The projects are always secondary priorities for participants, and the difficulty of coordinating some dozen entirely separate organisations towards something actually productive is immense. In practice each participant independently spends the money they get on something lightly relevant, and the occasional coordination meetings are spent on planning how to fulfill the reporting requirements of the grant.
Business and research are difficult enough even when done by tightly knit teams and constantly tested against real world systems and customer feedback. The idea that a hodgepodge of organisations can achieve poorly defined yet aspirational goals on a low budget is massively misguided.
> I can see that you're unfamiliar with how EU grants and these project consortia work, but I don't have much time to address this in great detail.
This is a take that can only come from someone who is dependent on Horizon, because I don't think any independent observer could look at Horizon projects and say they just work.
Having worked on an FP7 programme myself and having a family member involved in project audits, I’d say some skepticism is warranted—particularly regarding the incentives that attract private sector partners and the talent they actually allocate once funds are secured.
Funding is tied to employee qualifications and effectively subsidises salaries, which creates room for misalignment. No-shows of allocated employees were not uncommon, since a company willing to accept lower-quality deliverables can assign junior employees to do the work at a fraction of the cost, while the salary difference for their PhDs simply becomes added margin.
Can you tell me please what you worked on and where I can see the output? I’ve been adjacent to these kind of efforts and the only thing I can say is that I’m highly skeptical of your claims.
In the case of Quaero [1] "it did not work". Sure, all involved parties were praising the project and by constantly shifting goalposts they could label it a success, but in the end it was a huge waste of money, sucked in by the usual suspects.
While I do think EU grants are a good thing, I'm sceptical of these too-big-to-fail multi-national projects. I still remember the Human Brain Project.
I largely see this type of collaboration as a very inefficient form of a distributed company (team) whose members have no incentive other than to (mostly) collect points on research papers. There is no incentive to actually build a product in such a setting, and no incentive to remain competitive, since you cannot be fired or penalized in some other form. And generally speaking, as an individual you don't care about industry (market) competition, since you mostly care about remaining relevant within the very narrow scope of your research topic. So this is why this doesn't work: there is no coherent mass pushing toward the same goal. Seemingly there is, but there isn't.
The problem is academic culture is corrupt, and it’s very hard to reverse the decay.
Simple example: one Russell Group UK university (like many others) was admitting students who couldn’t speak English. A lecturer on a technical subject found they were struggling to understand his course, in part due to the language barrier. Come the exam, most of the students failed. He was told to make the exam easier so they would pass. The lecturer involved is a well meaning kindly man who would consider himself very ethical. But he did what he was told and the students passed.
In such a system it’s hard to see how an individual can fix it. If he had protested, he’d have been gently moved aside and the exam would have been rewritten by someone else.
Research is similarly corrupt. Grants are written to match a call, and they promise the earth. Friends review them and score highly. Pals on the grant committee favour their friends. And it’s implicitly agreed that the outcomes don’t have to be achieved. You go back to doing your original research, or not doing much at all, or more likely figuring out how to get some papers published and writing more grant proposals.
The idealistic, those actually interested in progressing the field, leave or are squeezed out, passed over for lectureships in favour of folks who bring in grants via BS and politics.
Choose a topic you know about. Go on the EPSRC website. Look at grants ten years ago and see what their promised outcomes were.
My only answer is that a project like this must be done by people hired from outside of academia, which at this point is probably corrupt beyond repair. I look back at previous generations and wonder how the hell so much advancement was achieved.
> "The models will be developed within Europe's robust regulatory framework, ensuring alignment with European values while maintaining technological excellence."
They may release something, but I doubt it will be more useful than what already exists.
> They may release something, but I doubt it will be more useful than what already exists.
I wouldn't prejudge this. I'm not saying that you're wrong, but I'm skeptical of the assumption that the model will be incompetent or inferior.
Also, don't forget. They'll open source it end to end. From data to training/testing code and everything in between.
As someone who worked in several Eurescom research projects back in the early 90s and watched it all get steamrolled by actual pragmatic work done in telcos and US manufacturers, I have zero faith in this even as a political/independence gesture.
There are loads of people who think "there is no moat and Europe can do this" (including the Portuguese government, which announced a Portuguese LLM at WebSummit--which, hilariously, is being trained on a research "supercomputer" in Spain), and they have no idea how far (politically, economically and pragmatically) Europe's tech scene is from the US. Other than Mistral, of course.
I'm involved with IMI-BIGPICTURE, a similarly sized EU initiative (~€70M funding). It's not that bad. Things will take a while to start moving, but as long as all the players stay on the same page, shit will get done. 10x slower than with a small team, but some things can't be done in small teams.
> The project aims to create a repository of digital copies of around 3 million slides covering a range of disease areas. This repository will then be used to develop artificial intelligence tools that could aid in the analysis of slides.
€70 MM to get the digital copies of 3 million slides. Speaks for itself.
Can't talk specifics but I worked with a perpetually failing startup that spun out of a very prestigious university. The company was lined with way too many professors. Their burn rate must have been incredible, based on the huge investments they got. Their product was already "meh" before the AI boom made it utterly obsolete. They made huge promises but delivered poor results (in an area where 90% accuracy was basically useless). They never seemed to iterate on the product. Suddenly (almost overnight) we got word that they were out of money and were likely to cease operating. At the 11th hour some idiot bailed them out, likely because of their academic credentials (certainly not because of their IP, product, or output capability). Or maybe it was sunk cost fallacy. Idk.
Anyway, they're still flailing along, burning through a seemingly infinite runway. Academia FTW!
As someone who is in general skeptical of programs like this (and a European), there are two remarkable / timely things about this:
- This project doesn't just allocate money to universities or one large company, but includes top research institutions as well as startups and GPU time on supercomputing clusters. The participants are very well connected (e.g. also supported by HF, Together, and the like with European roots)
- DeepSeek has just shown that you probably can't beat the big labs with these resources, but you can stay sufficiently close to the frontier to make a dent.
Europe needs to try this. Will this close the gap to the US/China? Probably not. But it could be a catalyst for competitive open-source models and partially revitalize AI in Europe. Let's see.
PS: On Twitter yesterday there was a screenshot showing that a new EU draft used "accelerate" six times. Maybe times are changing a little bit.
Disclaimer: Our company is part of this project, so I might be biased.
I wish you the best of luck. However, this is still basically just a European joint research project (admittedly comparatively well funded) with partners who have also been connected before in other research projects. To really compete in this space will require new ideas, great talent, and good leadership towards a common goal. I have been part of many EU-funded projects myself and know the difficulty of realizing this within such a project. Public funding sadly has adverse effects sometimes.
As for computing cost: since EuroHPC gives resources to research for free, there can be more budget for computing. The EuroHPC Joint Undertaking has just decided to invest hundreds of millions of euros in new AI clusters and supporting services, so this can come on top. Projects like this are actually much needed to make good use of that money.
Disclaimer: my lab is involved in one of the new AI Factories.
So, if one has a well thought-through idea, what is the process of getting the resources ($$$) from OpenEuroLLM and the compute from EuroHPC? How do I become a partner as a long-standing engineer with plenty of industry practice in research and development?
I am asking this because I never really understood how EU funds work; they always seemed to me to involve a lot of gatekeeping.
The problem is that:
- These are not really supercomputing clusters in LLM terms. Leonardo is a 250 PFLOPS cluster; that is really not much at all.
- If people in charge of this project actually believe R1 costs $5.5M to build from scratch, it's already over.
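To put the 250 PFLOPS figure in perspective, here is a rough back-of-envelope sketch using the common ~6·N·D approximation for dense-model training FLOPs. The model size, token count, and sustained utilization below are illustrative assumptions, not figures from the project or from any published run:

```python
# Back-of-envelope: wall-clock time to train a dense LLM on a given cluster.
# Uses the common ~6*N*D approximation (N = parameters, D = training tokens).
# Peak throughput and sustained utilization are assumptions for illustration.

def training_days(params: float, tokens: float,
                  peak_flops: float = 250e15,   # 250 PFLOP/s peak (Leonardo-class)
                  utilization: float = 0.35) -> float:
    total_flops = 6 * params * tokens
    seconds = total_flops / (peak_flops * utilization)
    return seconds / 86_400  # seconds per day

# A hypothetical 70B-parameter model trained on 15T tokens:
print(f"{training_days(70e9, 15e12):.0f} days")  # → 833 days, i.e. over two years
```

Even under these generous assumptions, a single 250 PFLOPS machine would need years for one frontier-scale run, which is the commenter's point.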
I think no one believes that R1 cost $5.5M to build from scratch.
People in this project (most, not all) are very aware of the realities of training and are very well connected in the US as well. Besides Leonardo there are JUWELS, LUMI & others, which can be used for ablations and so on.
This will never compete with what the frontier labs have (+ are building) but might be just enough for something that is close enough to be a useful alternative :).
The goals don't matter. The people don't matter. The only thing that matters is how much regulatory red tape is involved.
My guess is that the paperwork will kill this. Read the announcement: too much discussion of the regulatory framework. In the US or China, all you need is some money and smart people. That's a very low barrier to getting started.
In other words, to be successful you need to be able to break the law and lobby the government? That is indeed the USA mindset, or should I say United Corporations of America? I'm happy EU is not USA.
I agree that the announcement should've talked more about goals and performance than regulatory stuff ;-).
But I think there is a new understanding among the bureaucracy that regulation (alone, without innovation) will kill Europe´s competitiveness and that some acceleration and cutting of red tape is necessary.
Can't say with certainty that this will be successful.
But that we, as a very young startup that is barely known outside of our AI open-source niche, are part of this is already a sign in itself. A year ago I'd never have believed that this might be an option (and I probably would've declined if someone had asked us to join an EU-funded project).
We will have engineers without a degree (but hundreds of thousands of HF downloads) working side-by-side with some of the top researchers + HPC centers.
What I don't understand is the big plan. Say you manage to bring about something that works in the lab on par with DeepSeek R1. What happens with it next? In the market, LLMs are continuously improved based on feedback (usage data etc.), and new versions are released multiple times a year. If we want to stay sovereign, we need a similar engine started in Europe, but I can't see how a research project relying on a walled-garden system of supercomputer centres can start it.
Might be debatable, but I tend to agree with Dario Amodei on this; my guess is that R1 is 7-10 months behind the internal frontier at the big labs, while having a few small novel tricks.
(But I might err; it will be interesting to see the development going forward.)
They allocated €37.4 million [1]. As a European, I truly don't understand why they keep ignoring that the money required for such projects is at least an order of magnitude more.
Deepseek's release has shown that there's no great risk in getting left behind. All the info is out there, people with skills are readily available, creating a model that will match whatever current model is considered frontier level is not that hard for an entity like the EU.
For everyone here shouting that the EU needs to do something, be a leader, what have they lost so far by choosing to lead in legislation instead of development?
They've lost nothing. They've gained a lot.
They can use the same frontier level open source model as everyone else, and meanwhile, they can stay on top of harmful uses like social or credit scoring.
Also speaking as a European, legislation is kind of the point of a government in the first place. I do think the EU goes too far in many cases. But I haven't seen anything that makes me think they're dealing with this particular hype train badly so far. Play the safe long game, let everyone else spend all the money, see what works, focus on legislation of potentially dangerous technology.
> legislation is kind of the point of a government in the first place
I would personally consider legislation to be but one means to an end, with the point of a (democratic) government actually being to ensure stability and prosperity for its citizens.
In that framework, "leading with legislation" doesn't make any sense—you can lead with results, but the legislation is not itself a result! Lead with development or lead with standard of living or lead with civil rights, but don't lead with legislation.
Your formulation sounds like politician's logic: "something must be done, this is something, therefore we must do it". Legislation as an end in itself. Very interesting.
> They can use the same frontier level open source model as everyone else, and meanwhile, they can stay on top of harmful uses like social or credit scoring.
We are dependent on models created by USA and Chinese companies for access to the technology that seems to be the next internet - while the entire world is accelerating hard towards protectionism and tariff wars.
I partially agree with you. The only problem is that these markets are highly monopolistic, and we will be creating another technological dependency on the US.
DeepSeek didn't show anything except the compute cost of the final training run. We don't know how much the data collection cost, how much legally questionable data (copyrighted material, OpenAI outputs) was needed, the cost of experiments, etc.
> Creating a model that will match whatever current model is considered frontier level is not that hard for an entity like the EU.
If they have this as their top priority and allotted a few billion dollars, then sure. Not in the current form, where the people involved are in it only for publications, not for the hard engineering that takes months or years; people who can do that could do the same thing at OpenAI or DeepSeek for something like a $1 million salary, which both of them pay.
Personally I'm rather happy that the allocation was not too large at first, even that is quite a sizeable sum. The EU is great at kickstarting projects that sound like a panacea, but end up not leading to anything. Once they have something to show, by all means, throw more money at them.
The trap that these EU projects typically fall into is that they burn all of the grant funding on paying politically connected consultants to write reports. No one gets around to building an MVP.
As said before in another comment. The project can likely make use of 'free' EuroHPC resources, which will also be funded simultaneously with hundreds of millions. Still not Stargate, but if they can actually innovate something beyond the obvious (like R1) I think the money is still useful.
On what basis are you stating this? I'm asking because I have been involved in another project like these (15M budget) and the main issue was the lack of computing resource allocation, because no one thought about it (true story).
DeepSeek had plenty of R&D expertise that was not included in the (declared) model training cost. Here we are talking about building something nearly from scratch; even with an open-source starting point you still need the infrastructure, expertise, and people to make it work, which with that budget will be hard to secure. Moreover, these projects take months and months to get approved, meaning that this one was conceived long before DeepSeek, highlighting the original misalignment between the goal and the budget. DeepSeek might have changed the scenario (I hope so), but that would just be a lucky ex-post event, not a conscious choice behind that budget.
DeepSeek probably spent closer to two billion on hardware. And then there's the energy cost of numerous runs, staff costs, all of that. The $5.5M figure was basically misleading info, maybe used strategically to create doubt in the US tech industry or for DeepSeek's parent hedge fund to make money off shorts.
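The "closer to two billion" claim is about total capital expenditure, not a single run. A hypothetical sketch of the relationship, where the GPU count and per-GPU cost are assumptions roughly in line with third-party estimates rather than disclosed figures:

```python
# Hypothetical capex sketch: total hardware spend vs. one training run's cost.
# All numbers are assumptions for illustration, not disclosed figures.
gpus = 50_000            # assumed Hopper-class GPUs across the parent fund
cost_per_gpu = 30_000    # USD per GPU incl. server/networking share (assumption)
capex = gpus * cost_per_gpu

final_run_cost = 5.5e6   # the figure quoted in this thread for R1's final run
print(f"capex ~ ${capex / 1e9:.1f}B; final run = {final_run_cost / capex:.2%} of it")
```

Under these assumptions, the widely quoted run cost is a fraction of a percent of the hardware bill, which is why quoting it alone reads as misleading.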
I mean, I get that the current strategy by most participants seems to be burning billions on models which are almost immediately obsoleted, but it's... unclear whether this is a _good_ strategy. _Especially_ after deepseek has just shown that there _are_ approaches other than just "throw infinite GPUs at it".
Like, insofar as any of this is useful, working on, say, more techniques for reducing cost feels a lot more valuable than cranking out yet another frontier model which will be superseded within months.
As someone who lives here, I'd actually be surprised if we even got that. I expect lots of taxpayer funded websites, manifestos, PowerPoints and numerous discussions and ultimately nothing.
That'd be very very good actually. I'd be happy if institutions would use that where one could TECHNICALLY (maybe just a miniscule amount of people would do that) verify data from end to end, instead of some "open" model that is actually not open at all. A little worse performance is a good trade-off imo
What's with this American mentality that everything needs to always be the best, and if it isn't, it shouldn't even exist? I know the USA is alright with breaking the law, invading people's privacy, and lobbying its government to the point where it's really the corporations that elect politicians into power, but why do you also need Europe to be the same way? I thought us Europeans have made it pretty clear we don't like your way of governing, so stop forcing it on us. I'd much rather use a less capable LLM if it meant that the LLM isn't driven on top of mountains of illegally collected data.
The actual top EU AI labs like Mistral, Black Forest Labs, or Stability AI are nowhere to be seen. Same goes for potent, established companies like SAP, Schwarz Group and the like. They likely made the right move here as this is doomed to fail, as correctly elaborated by the top comments.
It's all fun and games until AI models decide your type of people (blonde/brown/from that zip code/with that type of last name/went to that school/worked there in the past/have those facial features) are "bad" or "untrustworthy" or don't deserve healthcare or to be hired for that job or get a mortgage.
"AI" bias has existed for as long as we have had "AI" in its various forms. Remember ML algorithms classifying black people as monkeys? And the "solution" was to make them unable to find monkeys or primates. That one got big because of the implication.. when it's "people with the last name Smith being dumb", nobody will care
The alternative is businesses are not held to account. I'd much rather have a cookie pop-up and GDPR notices than businesses have no guard rails against moves that are not in the interest of the user/customer.
> The models will be developed within Europe's robust regulatory framework
I'm sure that all AI research needs is "robust regulation".
As a European, it annoys me to no end that Brussels bureaucrats think they know and understand everything and can regulate everything. The only thing they are achieving is making sure that AI companies will avoid forming in the EU, because nobody wants to be at a disadvantage compared to the rest of the world. Sure, eventually they will provide service to EU countries, but we will never have our own industry.
The EU needs to stop having pencil pushers make decisions on things they have no clue about and somehow get people who know what they are talking about to make the choices.
Ball of mud.
[1] https://en.wikipedia.org/wiki/Quaero
Remember the EU Search Engine project, Quaero, and its equally failed successor, Theseus? No? I thought so.
I'll believe it works after they finally have one success.
This is how the EU works. It's the reason the EU has very little innovation compared to the USA.
Nice way to frame this.
PS: Huge fan of Latent Space :)
wdym?
No way
Is that a new take? Because so far DeepSeek was considered proof that small companies are able to compete with big players like OpenAI...
[1] https://digital-strategy.ec.europa.eu/en/news/pioneering-ai-...
https://www.youtube.com/watch?v=vidzkYnaf6Y
What could possibly go wrong
> legislation is kind of the point of a government
As an American, most of this post reads like doublespeak satire. I guess it's not, but just to put a transatlantic pov here.
I'll add a sports metaphor for good measure: in order to become expert football players, we'll get tickets to watch the best teams play.
The private sector often does not fund projects like these, as they have a bad return on investment.
They seem to have enough to send overseas and to spend on illegal economic migrants.
> Private sector often does not fund projects like these as they have bad return on investment.
Then why does the private sector in the US fund projects like these?
I have zero doubt that nothing else will come out of this.
Source: have been working with major UN and international bodies on the software side.
https://semianalysis.com/2025/01/31/deepseek-debates/
The EU gets some publicity, and the public gets nothing but another bite out of their taxes.
I also can't wait to get bombarded with cookie popups, AI bias popups, then AI accuracy popups, etc.