i have my own archive of my own bugs/"artworks" - https://twak.org/glitches-in-the-worlds-geometry-engine/
https://en.wikipedia.org/wiki/Turing_Institute
There was a previous Turing Institute in Glasgow doing AI research (which back then meant rules-based systems, though IIRC my professor was doing some work with them on neural networks), which hit the end of the road in 1994. There was some interesting stuff spun out of there, but it's a whole different institute.
The current one has recently been struggling for relevance.
https://www.ft.com/content/6bfea441-e16c-499a-a887-69f735c29... (https://archive.ph/ujfhb)
I hope they turn it around, because the UK's need for academic AI coordination/leadership is so high.
I haven't found a definition of consciousness that is quantifiable or stands up to serious rigour. If it can't be measured and isn't necessary for intelligence, perhaps there is no magic cut-off between the likes of DALL-E and human intelligence. Perhaps the Chinese room is as conscious as a human (and a brick)?
That being said, this is not an excuse for refusing to share paper code or for failing to make the experiments reproducible.
In these situations, I have suggested releasing anonymous implementations after the paper is accepted just to get the code out there. I am not certain this is the right thing to do!
Frequently the PIs (the bosses) will not even glance at the repositories written by junior members, probably can't read code anyway, and certainly won't allocate time for their maintenance. Even worse, most academics who do publish code have never been exposed to real-world software engineers, their techniques, or their tools.
At best, Photoshop can play a role in covering up the evidence or artifacts of this much more sophisticated approach to faking identities.
It's not to say the developers and scientists at Adobe are lesser; it's that it's not the same tool, or the same problem being solved.
Put crudely, Photoshop lets you draw a mustache on someone's photo. This is about inventing a photo that never existed before.
Adobe research scientists are crazy strong in the area of deep/neural graphics [1]. Perhaps we should disentangle Adobe Research from Photoshop?
In the UK, 90% of cyclists are older men in the latest Lycra racing gear on road bikes. In Bruges, people just get on a regular bike with a basket on the front, wearing their normal clothes.
(Don't confuse "track bikes" and "road bikes"...)
Both often work with unclear requirements, and sometimes face intermittent bugs that are hard to fix, but in most cases SWEs create software that is expected to always behave in a certain way. It is reproducible, it can pass tests, and the tooling is more established.
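To make that concrete, here's a toy sketch in Python (the slugify function is invented for illustration): a conventional unit test pins one exact expected output to one input.

    # Deterministic code: the same input must always yield the same output.
    def slugify(title: str) -> str:
        return title.strip().lower().replace(" ", "-")

    def test_slugify():
        # Either always passes or always fails - no probabilities involved.
        assert slugify("  Hello World ") == "hello-world"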
MLEs work with models that are stochastic in nature. The usual tests aren't about a model producing one exact output; they are about metrics, for example that the model produces the correct output in 90% of cases (evaluation). The tooling isn't as developed as for SWE, and it changes more often.
So for MLEs, working with AI that isn't always reliable is the norm. They are accustomed to thinking in terms of probabilities, distributions, and acceptable levels of error. Applying this mindset to a coding assistant that might produce incorrect or unexpected code feels more natural. They might evaluate it like a model: "It gets the code right 80% of the time, saving me effort, and I can catch the other 20%."
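For contrast, a sketch of that metric-style evaluation (the toy classifier and the 90% bar are invented for illustration): the assertion is over an aggregate metric, and individual wrong predictions are tolerated by design.

    import random

    class ToyClassifier:
        # Stand-in for a trained model: correct ~95% of the time by construction.
        def predict(self, x: int) -> int:
            return x % 2 if random.random() < 0.95 else 1 - (x % 2)

    def test_accuracy_threshold():
        model = ToyClassifier()
        eval_set = [(x, x % 2) for x in range(1000)]  # (input, label) pairs
        correct = sum(model.predict(x) == y for x, y in eval_set)
        accuracy = correct / len(eval_set)
        # Statistical assertion: no single prediction has to be right,
        # only the aggregate accuracy has to clear the bar.
        assert accuracy >= 0.90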
Over a career, SWEs start rigid and overly focused on the immediate problem, and become flexible/error-tolerant [1] as they become system (mechanical or meat) managers. This maps to an observation that managers like AI solutions: they compare favourably to the new hire, and managers have the context to make that comparison.
[1] https://grugbrain.dev/#:~:text=grug%20note%20humourous%20gra...