The more interesting result comes when you combine AI in a jet with:
(*) Not needing 2-5 years of expensive training.
(*) Not needing space for the human and its associated safety systems.
(*) Being completely willing to sacrifice the aircraft to "win" an encounter or otherwise achieve its goals.
In other words, an air force does not need AI that can completely dominate human pilots.
It simply needs AI that is "mostly similar in performance"; combined with the advantages mentioned above, that makes such aircraft significantly better in terms of fighting capability.
Of course, that is just raw combat, not other areas of flight where greater human intelligence is of more value.
I think the days of big expensive aircraft like modern fighters / bombers are numbered. They will likely be replaced with a swarm of drones, which are smaller, cheaper and harder to detect. For $100M (the price of just one fighter) you can field 10,000 cheap drones and literally blot out the sun over the field of battle.
Complemented, and AI-piloted, but not replaced. There are significant positive scale factors in aerodynamics. Larger jets have much better thrust to weight ratios, allowing them to move faster, travel further, maneuver better, and carry heavier and longer ranged weapons and sensor platforms.
Light drones and big expensive aircraft will both exist in the future and play separate roles. They also won't interact much, aside from a fighter group spearheading a drone group. A drone swarm is too diffuse to be a worthwhile target for a fighter's weapons, but a fighter is too fast to be engaged by a drone swarm.
Air battles of the future will likely be conceptually similar to air battles today, but with light drones replacing bombers with guided bombs. An initial wave of fighters will contest air superiority. If they achieve it they will use it to launch long-range attacks to disable a small number of anti-drone hard points. Once the way is clear a wave of smaller, slower drones can swarm in and bomb a large number of targets with precision.
I suspect there will be a fairly long lasting middle step: an AI piloted "mini jet fighter". Say 4x smaller, but faster and more maneuverable by virtue of not having to protect a human pilot, and still capable of carrying existing air to air and surface to air missiles. It will sort of be an auto-piloted aircraft carrier for the missiles, which at some level are the "real aircraft" these days already. The missiles need a larger, fast and hard to shoot down vehicle to carry them into the region of combat.
I'm a bit skeptical. For one, how much of a modern aircraft is really dedicated to cockpit space, maybe 5% on a small fighter? How much extra fuel and thrust is really required to lug along an ~80 kg human (okay, maybe double that to account for supporting systems)? Keep in mind a single Sidewinder missile weighs 85 kg.
For a drone (even a swarm of them) to compete with a modern jet fighter, it'll need a reasonably powerful engine and sufficient fuel to keep it up in the air long enough to: seek out and engage the enemy, carry a decent payload, and reach a suitable altitude. That all sounds like a lot of fuel and engine power to me, especially if you're expecting this drone (swarm) to operate outside of the immediate vicinity of the launching area.
I see more potential for drones to act as a 'screen' for jet fighters once stealth technology is made irrelevant by advances in radar. I foresee the use of low-cost vehicles that resemble fighters (in the ways that count) but carry minimal ordnance (if any) to keep costs down. They could also provide auxiliary functionality like electronic warfare and scouting.
I mean you could in theory develop a drone that exploded on impact and operated within a limited area, but you've just re-invented the guided missile, with a little extra smarts.
> I think that days of big expensive aircraft like modern fighters / bombers are numbered. They will likely be replaced with a swarm of drones, which are smaller, cheaper and harder to detect. For $100M (the price of just one fighter) you can field 10000 cheap drones and literally blot out the sun over the field of battle.
Maybe that's true of regional militaries.
The US wants to be able to project power - often quite far from bases. The drones you're talking about just don't have much of a range.
I'm not saying they won't be used - I'm sure they will be. But there will also be much larger and more expensive drones as well.
It doesn't stop there. You train one AI, you've trained them all. A minor performance increase instantly upgrades the capability of your entire fleet. An AI that can learn from real world situations will likewise convey that learning to your entire fleet instantly.
They say smart people learn from the mistakes of others, but we often have some trouble applying this. An AI should do better.
Of course, the corollary is that bugs will affect your entire fleet. I wonder if a future cyber-security role will be to look for AI blindspots to exploit en masse without prior warning (like flying with the sun behind you, but more arcane).
The absolute worst case scenario would be somehow tricking the AI or taking control from it so jets start attacking/kamikaze-ing their own side en masse. I suppose this is theoretically already a risk for some UAVs and perhaps even some advanced human-piloted jets, but a hijacked autonomous fleet would be a wild sight to see.
No doubt some organizations are researching how to perform and defend against things like that. The future of asymmetric warfare in particular could get really crazy.
AI doesn't learn in its production form. That's training. You'll get the same deterministic response out of every fighter, even if there is some level of sensitive dependence on initial conditions.
Human pilots do not suffer this problem. A human can arbitrarily change things up, for better or worse. Going the AI route is basically going the T-34 route: suddenly quantity (via reproducibility) takes on a quality all its own. My issue with that, philosophically, is: does going down that road actually solve any pressing problem?
To me it rings a bit hollow and empty unless there's some other task we'd prefer the people who would be pilots to be doing.
AI can also fly past blackout/redout thresholds. Even in current aircraft that would be added capability. But having that in the design phase may open up all sorts of possibilities.
Probably not as much as you would think. Modern airframes are still not capable of sustaining as much acceleration as missiles. You just can't outmaneuver today's missiles.
High-G tolerance is not a dogfighting panacea. Even without the pilot, there are issues when you want to take a large aircraft into 10+g range. Structural strength is one thing but there are also aerodynamic limits. The higher the G the higher the stall speed. Superhuman or AI, trying to pull 10g in thin air will result in flow separation (ie stall) and departure from controlled flight... ie you lose control and drop like a rock.
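To put a rough number on that relationship: at the stall boundary, lift has to equal load factor times weight, so the stall speed grows with the square root of the G being pulled, and it grows further as air density drops. A minimal sketch, with purely illustrative numbers:

    import math

    def stall_speed(v_stall_1g, load_factor, density_ratio=1.0):
        # Lift at the stall boundary must equal n * weight:
        #   n * W = 0.5 * rho * V^2 * S * CLmax  =>  V = V_1g * sqrt(n / density_ratio)
        return v_stall_1g * math.sqrt(load_factor / density_ratio)

    v_s1 = 70.0  # assumed 1 g stall speed in m/s at sea level -- illustrative only
    for n in (1, 3, 6, 10):
        sea = stall_speed(v_s1, n)
        high = stall_speed(v_s1, n, density_ratio=1 / 3)  # roughly 10 km altitude
        print(f"{n:>2} g: {sea:5.0f} m/s at sea level, {high:5.0f} m/s in thin air")

At 10 g the stall speed is already more than three times the 1 g value, and thin air multiplies it by another factor of about sqrt(3), which is exactly the flow-separation problem described above.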
Missiles don't care about stalling. They are on a suicide mission to get as close to the target as possible for the nanoseconds necessary to detonate. Missiles, being small and dense, are much stronger than any aircraft.
I'm sure someone in these programs has speculated what air combat would look like between AIs where body-based thresholds are removed and machine learning has developed new tactics that are only applicable in a pure AI arena. I'd be really interested in seeing those speculations.
Pilots don't really take up as much space on these planes as you'd think. These planes are huge; the F-22 is over sixty feet long. If you replaced the pilot, controls and screens, weight-wise you might get to carry one more AMRAAM, but that sucker is twelve feet long.
These are large aircraft (though the F-16 is significantly smaller than an F-22) but the pilot + ejection seat + airtight pressurized compartment + life support systems + displays and controls really starts to add up in both the mass and volume budgets.
But the pilot requires more design compromises. The requirement to have a large canopy protruding from the fuselage for maximum visibility is a challenge to both building a stealthy aircraft and aerodynamics.
Also, currently the pilot and their circulatory system keep aircraft from turning at more than ~9 G (and then only for brief periods of time). Without a pilot, it would be possible to build aircraft with stunning maneuverability.
The airframe can handle more Gs than the pilot can though, so if you don't have a pilot then in theory it opens up maneuvers that were not previously performable.
In the context of dogfights, why do you even need an aircraft at that point? A swarm of small, cheap, explosive drones with a kamikaze AI sounds like the end of human-piloted warplanes to me.
I don't think dogfights are the future of air-to-air combat.
> A swarm of small, cheap, explosive drones with a kamikaze AI sounds like the end of human-piloted warplanes to me.
This already exists. We just call them surface to air missiles or air-to-air missiles.
Modern air superiority combat is point-and-click. Send your air superiority fighter over, shoot a missile from beyond the sensor range of the other craft, and then blow them up.
----------
Air support fighters need guns (see the A-10 Warthog) and other ammunition. But will we really be using guns in the future of air-to-air combat?
You'll want a two-stage system of some sort anyhow. There's a fundamental tradeoff between the ability to carry the fuel needed to gain height, maneuver into the correct position/heading/speed, loiter, etc, and to have the extreme power-to-weight ratio and maneuverability for a suicidal final attack run.
A plane that is flying high enough and can travel fast enough is essentially immune to ground-based interception. Your launch has to spend a significant amount of time and energy just to get high enough, and launches like this are extremely loud to the defending plane's sensors. If you launch too early they can just turn around and fly out of range; too late, and they can overfly and outrun your range.
Essentially, the position + velocity interception state space for smaller kamikaze designs does not necessarily include ground level & stationary for a large portion of the design space. And once you start up-scaling things to get the endurance necessary to attempt intercepts, then you lose the maneuverability and cost advantage for making kamikaze attacks, and attaching parasitic missiles or drone aircraft for the final attack run makes a ton of sense.
Please show me a swarm of small, cheap, explosive drones that can't be avoided by using radar and outdistanced/outmaneuvered. To do that you need medium to large and expensive drones.
Say you're the president of Elbonia, and you get a report that an unidentified aircraft is in your airspace.
Without a human pilot in a fast jet, how do you get up there to take a closer look at the situation to decide what to do?
Modern warfare is limited in nature, with rules of engagement, etc.
Iran just shot down an airliner because they had poor ROE and control of their systems.
Just because something looks like a threat on sensors does not mean it is a threat. It could be that natural phenomena are masking things. It could be a ruse, such as an adversary is disguising a civilian aircraft to look military, or vice versa.
This result with dogfighting is interesting, but is only a very small part of what fast jets are useful for.
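So did the USS Vincennes. https://en.wikipedia.org/wiki/Iran_Air_Flight_655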
Also, a higher maximum G limit. AIs should be inherently much more maneuverable than human pilots because humans can only pull -1 to 9 G; an AI would only be limited by the aircraft itself.
This is of marginal value. Pulling Gs might help you in a close-in dogfight, but when your primary threat is a radar-guided air-air missile at 75mi+ or a SAM site at 200mi+ of range, Gs don't really matter unless you've failed multiple other possible means of defeating the incoming missile. And even in this case, high Gs aren't going to be much more than a hail mary.
I think the issue is going to be that the AI can pull higher G-forces than a human can and also never has to be retrained.
This means a swarm of "cheap" craft, where you deploy 20 of them to take anything out, is going to be the optimum. And without the need for all those human systems (glass cockpit, AR consoles, ejection systems, seats, etc.), we're good.
For non-combat operations they can be flown similarly to drones.
I'm not sure they wouldn't require retraining. If all the pilots are AIs, the tactics would change, necessitating retraining. Then there would be the search for better AI tactics. You'd still be in a position of a tactics race, driven by simulation and machine learning probably, so the evolutions would iterate faster, and the deciding factor would become the computing power each air force could throw at it.
With perhaps the exception of the training, wouldn't most of those problems (and some other ones mentioned in this thread) be solvable using remotely controlled, but still human-controlled, fighter jets?
There must be a reason that I'm missing why this isn't already a thing, anyone got an idea?
Gamers worry about the latency of a wireless controller sending a signal from their couch to their TV.
For a fighter jet with a remote pilot, you're looking at potentially hundreds of miles of range, and the latency of the video transmission to the pilot combined with the latency of the control signal back to the plane. I can't imagine that being acceptable for any sort of dogfighting scenario.
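For a rough sense of scale, here's a back-of-the-envelope propagation-delay calculation; a sketch with illustrative distances only, since real links add relay, encoding and processing delays on top of pure speed-of-light time:

    C = 299_792_458  # speed of light in m/s

    def round_trip_ms(path_km):
        # Out-and-back propagation time for a signal path of the given length.
        return 2 * (path_km * 1000 / C) * 1000

    links = {
        "direct RF link, 300 km standoff": 300,
        "direct RF link, 1000 km standoff": 1000,
        "geostationary satellite relay": 2 * 35_786,  # ~35,786 km up + down each way
    }
    for name, km in links.items():
        print(f"{name}: ~{round_trip_ms(km):.0f} ms round trip, propagation only")

A few milliseconds over a direct link is workable; roughly half a second through a geostationary relay, before any processing, is not something you want in a turning fight.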
Reliable comms are far from guaranteed in a war, and if there's a gen. 4 fighter dogfight involved, it's presumably against a reasonably sophisticated adversary capable of EW.
Also opening up the ol' remote desktop port on an F16 is a major cyber risk.
AI scales better to however many jets you want to use. AI improvements can also be rolled out instantaneously to every jet. There is no need for continuous live communication in "hot" combat. Fewer video leaks when they kill a bunch of civilians again.
Also think about the export market! Countries that now can buy the ruthless-pilot with the plane! And like with so many exports, you can give them a shittier version of the AI, so they do not become a danger to your forces!
Also, it isn't an either/or type situation. You can have both. The Air Force is working on "Loyal Wingman", where you can have a squadron with a manned system helping strategically target and direct its "unmanned wingmen", AI-based planes.
The biggest advantage I bet (in a real AI fighter) is removing the person from the cockpit. Modern jets have been able to fly harder than humans can reliably stand for a while, being able to pull higher Gs means tighter turns.
The real question is how much does gun dogfighting ability matter now because guns are almost vestigial on modern fighters. Dogfights now are missile affairs and this test was a gun based one.
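https://www.businessinsider.com/f35-pilot-f-35-can-excel-dog...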
AI can run the show locally, with a remote operator making the actionable decisions. Military drones already operate this way, without much onboard smarts.
Not something I'm excited about, but likely inevitable.
I think what we're actually going to see is something like assistant drones that go with the pilot, i.e., using the F-35 as a command-and-control system to give orders to the AI drones. And/or just have them back the pilots up in combat anyway.
The issue with AI is that combat can change so quickly. It's hard to make the right calls.
Let the AI handle air-to-air defense while the human pilots take out what targets they can at their discretion, etc.
This is explicitly part of the plan for the F-35. It also frees the drones from having to carry sensor equipment, and lets all of the drones benefit from a single sensor suite too heavy for the drones themselves to carry.
I'm almost excited to blow another hundred billion dollars on designing a next-gen aircraft without any G-Force considerations. Current jets designed around a human pilot can already achieve mind-boggling performance of over 10G on multiple axes, what would a blank slate aircraft with none of those limitations be able to do?
> Of course, that is just raw combat, not other areas of flight where greater human intelligence is of more value.
You don't need the AI to replace every plane or pilot; you could have a traditional plane and pilot to do all the human things and an AI or two to protect the human or engage the targets. Mixed AI/human squadrons.
Would you even need a good AI or equipment to be sufficiently disruptive? I imagine a swarm of crudely moving, low cost drones would win most encounters with more complicated targets. This feels like a boon to smaller nations who can quickly gain an edge by capitalizing on existing research.
Or mixed in with human pilots as a force multiplier. I think that is how they will start being used, so that there is always a human observer. At the start.
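And the enemy doesn't need to beat the AI in a dogfight, just find the infinite side-channel attacks on the training set.
WW3 might be won by adding a few high intensity LEDs to some old MiGs.
https://www.businessinsider.com/clothes-accessories-that-out...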
This is a tough one; to make an exploitable attack you would need, at a minimum, to have a copy of the flight AI and be able to simulate possible encounters. Given the relatively finite number of side channels available, it's probably far cheaper for the AI builder to run such simulations and adversarially test against the main exploitable variations, such as blasting the aircraft's sensors with bad/confusing data.
There just aren't that many methods of interacting with a plane >1 km away, and even fewer at the 50-100 km ranges that modern fighters are built to fight at.
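As a concrete example of what "adversarially test against bad/confusing data" can look like in practice, here is a minimal gradient-based input-perturbation sketch (FGSM), assuming a PyTorch image classifier; the model, images and labels are placeholders, and nothing here is specific to any real avionics system:

    import torch

    def fgsm_perturb(model, x, y, loss_fn, epsilon):
        # Nudge the input in the direction that most increases the model's loss,
        # bounded by epsilon per input element (Fast Gradient Sign Method).
        x = x.clone().detach().requires_grad_(True)
        loss = loss_fn(model(x), y)
        loss.backward()
        return (x + epsilon * x.grad.sign()).detach()

    # Usage sketch: adv = fgsm_perturb(model, images, labels,
    #                                  torch.nn.CrossEntropyLoss(), epsilon=0.03)
    # The builder then checks whether model(adv) still produces sensible output.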
I’m not sure how that business insider piece is applicable. Fighter jets have a huge range of available telemetry, not just visual spectrum cameras. High intensity LEDs might fool a camera but they won’t fool radar.
I don’t think the problem is fundamentally different from the pre-AI state of electronic warfare and countermeasures.
The fifth generation of fighter aircraft marked a significant departure from the idea of a fighter jet being anything like what we think of as a fighter jet.
They're now designed and employed as fighter 'platforms'. Instead of zipping around the battlefield guns and missiles ablaze in 1-on-1 combat, they're low observable long-range systems designed to understand and disrupt the battlefield by employing their electronic and long-range missile systems.
These AI are a natural extension of that. I imagine the 6th generation of fighter platforms will be commanding swarms of fighter drones to do the fighter part of their role.
The US Air Force is already looking at this. I believe the plan wouldn't be to tie it to a specific "generation" of fighter, and will allow fifth, and maybe even fourth generation fighters to lead a "swarm" of fighters.
The British Navy were already looking at fully automated dronecraft carriers back in 2015-16 too. Using smaller drones as armaments instead of missiles and the like.
"The overarching ACE concept is aimed at allowing the pilot to shift “from single platform operator to mission commander” in charge not just of flying their own aircraft but managing teams of drones slaved to their fighter jet."
This is likely to cause several interesting problems in my mind. For instance you may be about to sacrifice a lot of current talent to actually "fight" these planes, since you're now mainly focusing on the ability to coordinate and micromanage the battlefield instead of being physically/mentally able to assume the physiological role as primary control unit of the aircraft.
Also, to be honest, these AI controlled fighter platforms scare the shit out of me because there is now potentially fewer human decision points in the system.
Like it or not, you can end up with many times the destructive power in the air orchestrated by 1 guy without having the requisite sanity check of "Excuse the hell out of me, sir, but you want me to bomb WHAT?"
The capability to look at a situation and decide to call it off is a feature of warfare that I think is frankly underappreciated.
We may very well be working unintentionally toward creating a world where a small
Holy hell, it'd be nice if I finished my thought. We're creating a world where fewer and fewer people are capable of marshaling more and more destructive power.
This does not bode well in terms of the law of large numbers being able to temper the extreme characters that setup may invite.
But why would that human commander have to be in the air, putting his life on the line more than when they would be further away and on the ground/in a ship?
The only answers I can think of are a) that having human eyes in the sky still has advantages, or b) that long range communication is too unreliable (either as is, or because of possible enemy interference).
You may need an AWACS to direct the battle, but would it have to have a human on board?
But is either true? If so, what’s the reason? Is there a c) “we can’t tell the air force yet that pilots who actually fly are a thing from the past”?
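Latency + Bandwidth.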
> But why would that human commander have to be in the air, putting his life on the line more than when they would be further away and on the ground/in a ship?
Boyd wasn't just a pilot but also a strategist. One of his biggest adherents is the USMC. I don't imagine him rejecting this outright. I would love to know what he would have thought of AI, because the potential speed with which AI can digest observations and make a decision could be an order of magnitude greater than a human's. Boyd's EM theory was heavily influenced by analysis he and his collaborators did using early computers. I can imagine him being awed by AI and embracing it.
I'd imagine one of the (forget the name, the big planes with radar that buzz around the battle space) could handle a swarm of drones from a long distance. Not sure that it'd make sense to have a speedy jet out there doing that.
AWACS is the role, E-3 Sentry is the current plane. It's a 707 full of radar, but it's also a giant flying target. If you're going to link a bunch of drones to one platform, the drone controller is going to need a lot more survivability.
The problem is going to be keeping an AWACS close enough to command and control the fighters while not risking getting it shot down. An F-22 or F-35 (or even an F-15) is going to be a lot more survivable than an AWACS.
Or maybe the E-3 Sentry aka AWACS, but the JSTARS matches the "big plane" better and has a giant synthetic aperture radar for imaging large swaths of the battlespace. We deployed with JSTARS when I went to Iraq as a Shadow 200 TUAV pilot myself.
... in a standardized test that has nothing to do with real enemy encounters - the complete title.
We've seen this before, even in a far more limited and controlled environment like the game Dota: the "AI" can beat the humans a few times before the humans learn to exploit its many weaknesses.
Some program that self-learned an (admittedly impressive) number of reactions based on seeing/playing a huge number of simulations is not intelligence, so not AI. It cannot reason on the spot and will fall for the most ridiculous of traps. For example, the AI that beat the professional Dota players fell for running in circles around a tower forever while getting slowly damaged to death. Even the simplest of mammals (which we do not consider intelligent) would react to the pain at some point and bolt.
My theory is that AI will not exist until we reach AGI, because with specialized AI you can always fall outside its area of "expertise" and it will behave like a stupid bot.
> Some program that self learned a (admittedly impressive) number of reactions based on seeing/playing a huge number of simulations is not intelligence so not AI.
Is this really that different than what humans do?
After watching dozens of AlphaStar commentary videos [1] over the last few months, I was more or less thinking the same thing: the AI has basically evolved a massive ruleset: "Do X. If you see Y, do Z."
Nonetheless, I decided to start playing again myself. Poking around, I saw a recommendation to go through one guy's sort of "training course" [2], and guess what? A lot of it comes down to the same kind of thing. "Send your first overlord to scout their natural. If they haven't expanded to their natural, build one -- ONE -- spine crawler in your natural."
How much of our "intelligence" is really anything more than pattern matching + search? And during the actual dogfight, how much of what the human pilot was doing was anything more than simply pattern matching from their own vast experience racked up in a simulator?
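[1] https://www.youtube.com/playlist?list=PLVRQoOk_ltE3Fr1ofRE0Y...
[2] https://www.youtube.com/playlist?list=PLFeZeom2b4Dlt63qmkPO8...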
> A lot of it comes down to the same kind of thing. "Send your first overlord to scout their natural. If they haven't expanded to their natural, build one -- ONE -- spine crawler in your natural."
I think your general question is still a good one, but it's worth noting that from a human perspective, Starcraft matches only start off this way. Very quickly, the game state becomes complex enough that decision trees break down and intuition becomes the driving process for high-level human players.
The extent to which AI can begin to compete with this sort of intuitive human processing is most interesting to me. As it relates to this article, I think it matters a great deal if the experiment has constrained the system to the degree that it ceases to operate in that intuitive realm that high-level Starcraft matches operate in.
Clearly it can surpass peak human skill at StarCraft, but it’s important to understand what Deep Mind is doing differently.
One example is memory. DeepMind doesn't update its strategy when playing the same player repeatedly, or even over the same game. It can operate at near peak human performance indefinitely, but it avoids being exploited via deep understanding of the exact rules in play. It was also playing on the ladder under random names to avoid people developing specific counters.
> Is this really that different than what humans do?
Yes, we have built-in mechanisms that react to situations we have never encountered before. You will never be able to simply sit calmly while something is damaging you, for example.
Imagination plays a significant role for us humans. We also are capable of maintaining incomplete conceptual mental models. Some of us experience epiphanies. Etc.
Pattern matching and search are brute force approximation of actual “intelligence”.
We are quite different as we can evaluate new situations based on our understanding of the principles behind something like a build order in Starcraft.
The “one” spine crawler thing is a bit overstated in the context of an actual game. It’s not like it’s 100% necessary to build it, build it in your natural, or only build one depending on what you’ve scouted, the opponent’s race, and who you’re playing against.
Like the sibling said, the game begins like a decision tree but quickly falls outside that purview.
> "Some program that self learned a (admittedly impressive) number of reactions based on seeing/playing a huge number of simulations is not intelligence so not AI."
Seems like with every breakthrough the goalpost of AI gets moved.
I believe a researcher coined a phrase for this but I can't remember what it was.
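There's an article on Wikipedia called "AI effect", I don't know if that's what you had in mind.
https://en.wikipedia.org/wiki/AI_effect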
It doesn't get moved so much as we discover that a certain mathematical manipulation plus all our previous mathematical manipulations don't amount to an AI.
It also seems like with every breakthrough this complaint is raised without addressing the underlying issues with the new AI's shortcomings.
My hot take: true AI is so far out of reach of our ability it's not even funny, which makes the whole field either a search for the fountain of youth at worst, or at best a search for tools to inform humans or to replace humans in rote tasks. See
https://youtu.be/orMtwOz6Db0
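For example, would you fool even a child with this? https://cdn.mos.cms.futurecdn.net/s4DuKgTLnS4cngTyiqVkNC-970...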
Ummm, no? AI means a clear thing: artificial _intelligence_. Depth-first search is not intelligence. Auto-generated algorithms that succeed on most of the training data is not intelligence. And that is obvious from the errors "AI" makes that a human would never do.
I think there's value in having a huge learning set, but there are also limitations, as you're alluding to.
But I also don't think we're miles away from computers learning some sort of reasoning structure. Some sort of causality-type thinking where you have hierarchies which at some level are reasonably simple, because humans can only fit so much. But when the computers figure it out, they don't have that problem.
At the moment, yes, there's a huge corpus of patterns and you can make some smart decisions just by being able to learn from the huge library, but it's the difference between knowing that one move tends to beat another, without knowing the why. For instance in sports, you have man-marking vs zone-marking. The naive thing to do is just tabulate how often a team did one or the other vs how often they won. Then break it down even more by who they were facing and various stats. But if you don't have a theory of marking, you're a bit lost for explanation, even if the tabulation clearly says zone marking tends to win. A causal explanation might sound something like "man marking allows the other team to pull you out of shape and gives them the choice of which players face which". It might also tell you that sometimes it's actually smarter to man-mark, eg when there's some player you really feel is dominant and needs to be taken out.
I gather that people are working on this causality type AI though, so no doubt we'll see something interesting soon.
That's not how wars work. You get time to learn and adapt to the enemy's strategy. One plane down doesn't mean anything if you learn how to exploit the enemy's entire fleet of "AI" with one casualty.
This is why people train. Once AI piloted planes are prevalent on the battlefield, there will be a huge effort to capture enemy systems and develop tactics against them. The USA has been doing it for decades (https://en.m.wikipedia.org/wiki/Have_Doughnut), I’m sure that China and Russia have as well.
> It cannot reason on the spot and will fall for the most ridiculous of traps.
> My theory is that AI will not exist until we reach AGI.
You are confusing the definition of AI for AGI so of course you think that. AI doesn’t need to have true understanding to be considered AI, it just has to have the appearance of intelligence.
Why do people talk about swarms of drones but never of swarms of fighter jets? That's incredibly weird terminology considering drones are never deployed in this manner.
Yes, only thing is... it usually does not work that way in reality, and when the drones breach the enemy's defenses, they will continue with a quick genocide.
>We've seen this before, even for a lot more limited and controlled environment like the game Dota the "AI" can beat the humans a few times before the humans learns to exploit it's many weaknesses.
I recall that there was AI beating top Dota players. I'm not familiar with humans figuring out the AI and exploiting weaknesses. Every search I've done just shows articles about the AI winning. By chance do you know where I can read up on humans figuring out the weaknesses of the AI?
> The general strategy is to win by claiming first tower. At 0:00, you aggro the enemy creep wave so that they start following you. Then you walk around in a circle around the jungle, and the enemy wave will start to form a congo line that will follow you around. You then path around the jungle so that on the next wave spawn, you can aggro the wave again and continue to walk around in circles. The AI will burn glyph when your creep wave hits the tower, and for some reason it can't really decide between chasing you or defending the tower. So after about 5 minutes of doing this, your creep waves will eventually destroy the tower and you win the 1v1.
>Dota the "AI" can beat the humans a few times before the humans learns to exploit it's many weaknesses
Let's retry this experiment, but the loser of any game gets shot in the head, and the next player only gets basic telemetry while also shitting themselves. A human Dota player with a gun pointed at him will likely perform differently too.
Note that the AI player doesn't get killed if it loses, because it is software. It just doesn't get full telemetry and diagnostics.
Yes, they are all specialised AIs but we make them so fast nowadays that it doesn't matter they don't generalise yet, we can have as many as we want and they can be really useful in many fields. ImageNet can be trained in 30 seconds from scratch, and fine tuning takes just a few extra epochs. We don't need to wait for AGI, most low hanging fruit have not been picked yet.
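As a concrete illustration of the "fine-tuning takes just a few extra epochs" point, here is a minimal transfer-learning sketch, assuming PyTorch and a recent torchvision; the 10-class task and the data loader are placeholders:

    import torch
    import torch.nn as nn
    from torchvision import models

    # Reuse an ImageNet-pretrained backbone; retrain only a new head.
    model = models.resnet18(weights=models.ResNet18_Weights.IMAGENET1K_V1)
    for param in model.parameters():
        param.requires_grad = False                 # freeze the pretrained features
    model.fc = nn.Linear(model.fc.in_features, 10)  # new task-specific classifier

    optimizer = torch.optim.Adam(model.fc.parameters(), lr=1e-3)
    loss_fn = nn.CrossEntropyLoss()

    def train_epoch(loader):
        # One pass over a DataLoader yielding (image batch, label batch) pairs.
        model.train()
        for images, labels in loader:
            optimizer.zero_grad()
            loss = loss_fn(model(images), labels)
            loss.backward()
            optimizer.step()

    # A few calls to train_epoch(your_loader) are often enough for a usable
    # specialised classifier -- the "few extra epochs" the comment refers to.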
Without a clear definition of AI that everyone agrees on, we will never reach AI. If an AI is considered intelligent only if it is "general", then are we, as humans, even intelligent? I would argue strongly that we are missing the 'G'.
One of the major weaknesses of fighter aircraft is the pilot. Not their skills or abilities: the requirement for a soft, squishy human that deforms under G forces to pilot the thing is a disadvantage. The person in the cockpit is the most breakable part of the system. Everything else on the jet is capable of withstanding far more material strain.
Just being able to replace a human-driven fighter jet with one that is piloted by an AI - even a comparatively dumb one - would be an advantage for fighters. The AI driven jet would be more maneuverable straight out of the gate.
It's also by far the limiting resource on aircraft production. It takes 18 years to grow a human to the point where they can pilot a jet, and then a few years more to train them. Many years more to train them really well. In any sort of war of attrition, you'll run out of trained pilots long before you run out of planes.
The Japanese weren't defeated in the Pacific because they ran out of planes, they were defeated because they ran out of pilots. At the time of the Marianas Turkey Shoot, Japan still had a large carrier force (9) and a large number of aircraft (750), but their pilots were all green, and so weren't very effective in combat. The Battle of Santa Cruz Islands is considered a strategic victory for the U.S. even though it was a huge tactical defeat, because it depleted the stock of trained pilots enough that Japan wasn't able to mount effective resistance for the rest of the war.
> The Battle of Santa Cruz Islands is considered a strategic victory for the U.S. even though it was a huge tactical defeat, because it depleted the stock of trained pilots enough that Japan wasn't able to mount effective resistance for the rest of the war.
Didn't know this. How sad. The more I learn about WWII and its abject brutality, the more I marvel at how much US culture fetishizes it.
Related: this season of Revisionist History has a series about Curtis LeMay and the history of napalm that is equal parts fascinating and sickening.
The Japanese had excellent pilots, but they did not invest in the infrastructure of making more pilots. Every excellent pilot flew until he was killed. The Americans pulled top pilots back to train the next generation of pilots, creating a massive talent gap over time.
The lesson is one that applies to startups as well as to aviators: you have to invest in your people, not just in maximizing value delivered today.
There was another major difference between the Battle of Santa Cruz Islands and the Marianas Turkey Shoot: the US went from flying the vastly inferior Wildcat to the mostly superior Hellcat. Japan wasn't able to produce a more modern fighter than the Zero in significant numbers for the rest of the war, and by 1944 it was obsolete.
Many years ago there was an AI game for PC called Creatures, using neural nets for the Norns, the characters you would raise in the game by training them. The devs who wrote it were originally writing AI software for fighter pilots, and continually won simulated engagements because the AIs had no body-based limitations, which opened up a range of maneuvers that would literally kill a human.
The best example of such is a hard dive: if you're flying level and dive too quickly, the G forces drive all the blood into your brain, potentially causing a hemorrhage. The maneuver human pilots use for a hard dive includes a half barrel roll just to avoid this, and it's still less effective than suddenly pointing down. The AIs used hard dives effectively to shed human pursuers.
This led to the insight that bodies (in some sense) were a missing component of any human-like AI, so they made Creatures to test the theory. The neural nets had virtual bodies with hunger, fatigue, and pleasure/pain receptors; the player interacted with a god-like hand cursor that could pet them or spank them. For the 90s, it worked pretty well on Windows 98.
Interesting to note however that in the actual competition [1], the AIs didn't seem to find benefit from vastly exceeding the performance limits of humans, with regard to G-forces. Maybe these limits come through in the design of the jet as a whole, and would be a different story in a built-to-purpose UAS. The winning AIs here mostly benefited from super-human precision in aiming.
> Maybe these limits come through in the design of the jet as a whole, and would be a different story in a built-to-purpose UAS
That's what I'd suspect. A jet that's purpose built to be flown by a robot would likely be much more acrobatic, as you're designing for the engineering limits of the materials, rather than a human.
Regardless of the real answer, it's not making me feel like climbing into a jet cockpit and picking a fight with an AI.
Humans also have a limit on what they will do. A military with more AI can engage in actions that a group of soldiers would refuse to do. As more of the military becomes AI driven, it effectively concentrates more power behind those who operate the AI.
Alternatively, humans will also do things that the military doesn't want them to do: loot, rape, and kill non-combatants. Not every war is WWII, where military leaders actively directed troops to commit atrocities. More often than not, crimes are committed by troops against the wishes of their superiors.
On the bright side, horses are now spared much of the direct experience of modern warfare, so maybe humans will eventually follow.
On the flip side, if the oligarchy-with-social-policies was a knock-on effect of mass conscription during the napoleonic wars, a military with more AI suggests an oligarchy without social policies.
Latency between the jet and the pilot is an insurmountable problem. You either have a very laggy connection to a human pilot who is far away, or a wireless link to a more local pilot who is vulnerable due to being in a combat zone. Plus, it creates an EM signal that could be disrupted as a tactical weakness.
The latency, though, would be the primary killer. It's why the Air Force still needs to send drone pilots to Afghanistan - you can have a person piloting the drone during the mission out of a container in Nevada, but they don't have enough reaction time to safely and reliably land the thing in Afghanistan. Control has to be handed off to a local pilot who has a lower-latency control link to the aircraft.
I could tell that the human pilot had to reorient himself after encounters, while the AI never needed to waste time grasping the situation and was able to take advantage immediately, every time.
In the same vein, as a former fighter pilot was commenting, the advantage goes to the pilot who can keep the plane right at the very edge of its performance envelope--which an AI can do more precisely than a human pilot can.
We know from John Boyd that getting into the opponent’s OODA (observe-orient-decide-act) loop is what wins these fights. If the AI has perfect state information, then it’s no wonder that it beat the human pilot who has to go through all four stages.
how about this: the scary part is when the individual nodes are rigged to be able to operate autonomously, and are able to elect new leaders or form consensus with peers and select new mission objectives as required by the situation if the prior command hierarchy is destroyed or otherwise uncontactable for an extended period
Having human pilots is going to play out the same way cavalry charges did in WW1: old, out-of-touch generals who romanticize how things were back in the day, sending airmen to their deaths needlessly.
The inability to change and adapt to new technologies plagues the US Air Force and Navy, leaving both completely mis-equipped for what any war that doesn't involve fighting impoverished herdspeople would look like today. Only a fraction of either's budget is spent on systems that are actually good at anything other than showboating and being profit centers for mil-contractors. The majority of their budgets are captured by outdated manned metal (surface ships / piloted aircraft), most of which would be expended very quickly if a conflict ever did break out.
God forbid we ever have another conventional war, but if we do you can look forward to a couple dozen hypersonic glide vehicles (costing pennies on the dollar) sinking entire carrier groups in minutes, and swarms of drones just flying past overpriced F-35s, letting them expend all their munitions (which probably cost more than the drones), and still making it to hit their intended targets.
But the US Navy's still got some kinda-OK submarines, so they've got that going for them. The Air Force... idk, I guess they can keep talking about "stealth" (fancy paint) and how effective it is at avoiding detection by the underfunded militaries of 3rd-world dictators.
Militarized AI is nothing to celebrate. It should be banned or at least controlled with international agreements. 200K people died due to the atomic bomb, but AI weapons will not cause such apparent destruction. They will continue to be improved and integrated into the military and our culture. Because they will never be so scary to cause an outcry from the public or the experts, and politicians will not feel obliged to control them seriously.
Until it becomes completely accepted that flying robots kill human beings every other day... oh but wait. Already the case, never mind. It is okay I guess, as long as you are born in the country equipped with it.
I had an interview with Improbable in London, the CEO of their "Defense" research arm is a complete nutjob without any care for ethics or forward-thinking into the risk of what he is building. And that's a tech startup, I can't even imagine what's the mindset in the R&D labs of the old-school defence industry. Probably ridiculously entitled and self-congratulatory.
You not knowing in any level of detail is intentional. At best, the US military might have that under lock and key. These are more or less the most secretive projects a military has. If specs like "we sort of suck at x distance" were released, you give away a massive edge. Also, there is most definitely false info like that going around somewhere, so even if you find something, give that a lot of skepticism. Things like this are as much a war of intelligence as it is combat.
I remember a couple months ago, a video explaining the physics of a submarine being blown up showed up here. That was the first time I learned that a strategy for submarine warfare was to target an explosion under the center of the enemy sub, causing the water to rush up and creating a pressure differential that would crack the sub. Does that mean that is how sub warfare is done now? Or if that's the standard attack vector? Or that sub warfare would even be done with torpedoes now? No. The general public, and probably people not actively in the military with top secret clearance, only get bits and pieces of modern warfare strategy.
Edit: When I said the military might have those specs under lock and key, I meant my understanding is that the military is extremely compartmentalized and, aside from the highest ranking generals, there is almost no data aggregation on the details of what is capable.
() Not needing 2-5 years of expensive training. () not needing space for the human and it's associated safety systems (*) being completely willing to sacrifice the aircraft to "win" an encounter or otherwise achieve it's goals.
In other words, an air force does not need AI that can completely dominate human pilots. It simply needs AI that is "mostly similar in performance", and when combined the advantages mentioned above, means that such aircraft will be significantly better in terms of fighting capability.
Of course, that is just raw combat, not other areas of flight where greater human intelligence is of more value.
Light drones and big expensive aircraft will both exist in the future and play separate roles. They also won't interact much, aside from a fighter group spearheading a drone group. A drone swarm is too diffuse to be a worthwhile target for a fighter's weapons, but a fighter is too fast to be engaged by a drone swarm.
Air battles of the future will likely be conceptually similar to air battles today, but with light drones replacing bombers with guided bombs. An initial wave of fighters will contest air superiority. If they achieve it they will use it to launch long-range attacks to disable a small number of anti-drone hard points. Once the way is clear a wave of smaller, slower drones can swarm in and bomb a large number of targets with precision.
For a drone (even a swarm of them) to compete with a modern jet fighter, it'll need a reasonably powerful engine and sufficient fuel to keep it up in the air long enough to: seek out and engage the enemy, carry a decent payload, and reach a suitable altitude. That all sounds like a lot of fuel and engine power to me, especially if you're expecting this drone (swarm) to operate outside of the immediate vicinity of the launching area.
I see more potential for drones to act as 'screen' for jet fighters once stealth technology is made irrelevant by advances in radar. I foresee the use of, low cost vehicles that resemble fighters (in the ways that count), but carry minimal ordinance (if any) to keep the costs down. They could also provide auxiliary functionality like electronic warfare and scouting.
I mean you could in theory develop a drone that exploded on impact and operated within a limited area, but you've just re-invented the guided missile, with a little extra smarts.
Maybe that's true of regional militaries.
The US wants to be able to project power - often quite far from bases. The drones you're talking about just don't have much of a range.
I'm not saying they won't be used - I'm sure they will be. But there will also be much larger and more expensive drones as well.
It doesn't stop there. You train one AI, you've trained them all. A minor performance increase instantly upgrades the capability of your entire fleet. An AI that can learn from real world situations will likewise convey that learning to your entire fleet instantly.
They say smart people learn from the mistakes of others, but we often have some trouble applying this. An AI should do better.
Of course, the corollary is that bugs will affect your entire fleet. I wonder if a future cyber-security role will be to look for AI blindspots to exploit en masse without prior warning (like flying with the sun behind you, but more arcane).
No doubt some organizations are researching how to perform and defend against things like that. The future of asymmetric warfare in particular could get really crazy.
Human pilots do mot suffer this problem. A human can arbitrarily change things up for better or worse. Going the AI route is basically going the T-34 route. Suddenly quantity (via reproducibility) takes on a quality all it's own. My issue with that philosophically is does going down that road actually solve any pressing problem?
Which to me it rings a bit hollow and empty unless there's so.e other task we'd prefer people that would be pilots be doing.
Missiles don't care about stalling. They are on a suicide mission to get as close to the target as possible for the nanoseconds necessary to detonate. Missiles, being small and dense, are much stronger than any aircraft.
But the pilot requires more design compromises. The requirement to have a large canopy protruding from the fuselage for maximum visibility is a challenge to both building a stealthy aircraft and aerodynamics.
Also, currently the pilot and their circulatory system keeps aircraft from turning at more than ~9G (and then only for brief periods of time). Without a pilot, it would be possible to build aircraft with stunning maneuverability.
> A swarm of small, cheap, explosive drones with a kamikaze AI sounds like the end of human-piloted warplanes to me.
This already exists. We just call them surface to air missiles or air-to-air missiles.
Modern air superiority combat is point-and-click. Send your air superiority fighter over, shoot a missile from beyond the sensor range of the other craft, and then blow them up.
----------
Air support fighters need guns (see A10 warthog) and other ammunition. But will we really be using guns in the future of air to air combat?
A plane that is flying high enough and can travel fast enough is essentially immune to ground-based interception. Your launch has to spend a significant amount of time and energy just to get high enough, and launches like this are extremely loud to the defending plane's sensors. If you launch too early they can just turn around and fly out of range; too late, and they can overfly and outrun your range.
Essentially, the position + velocity interception state space for smaller kamikaze designs does not necessarily include ground level & stationary for a large portion of the design space. And once you start up-scaling things to get the endurance necessary to attempt intercepts, then you lose the maneuverability and cost advantage for making kamikaze attacks, and attaching parasitic missiles or drone aircraft for the final attack run makes a ton of sense.
Without a human pilot in a fast jet, how do you get up there to take a closer look at the situation to decide what to do?
Modern warfare is limited in nature, with rules of engagement, etc.
Iran just shot down an airliner because they had poor ROE and control of their systems.
Just because something looks like a threat on sensors does not mean it is a threat. It could be that natural phenomena are masking things. It could be a ruse, such as an adversary is disguising a civilian aircraft to look military, or vice versa.
This result with dogfighting is interesting, but is only a very small part of what fast jets are useful for.
So did the USS Vincennes. https://en.wikipedia.org/wiki/Iran_Air_Flight_655
This means a swarm of "cheap" craft that focus on deploy 20 to take anything out is going to be the optimal. And without the need for all those human systems (glass cockpit, AR consoles, ejection systems, seats, etc) we're good.
For non-combat operations they can be flown similarly to drones.
There must be a reason that I'm missing why this isn't already a thing, anyone got an idea?
for a fighter jet with a remote pilot, you're looking potentially hundreds of miles of range, and the latency of the video transmission to the pilot combined with the latency of the controls signal back to the plane. i can't imagine that being acceptable for any sort of dogfighting scenario.
Also opening up the ol' remote desktop port on an F16 is a major cyber risk.
Also think about the export market! Countries that now can buy the ruthless-pilot with the plane! And like with so many exports, you can give them a shittier version of the AI, so they do not become a danger to your forces!
The real question is how much does gun dogfighting ability matter now because guns are almost vestigial on modern fighters. Dogfights now are missile affairs and this test was a gun based one.
https://www.businessinsider.com/f35-pilot-f-35-can-excel-dog...
Not something I'm excited about, but likely inevitable.
The issue with AI is that combat can change so quickly. It's hard to make the right calls.
Let the AI handle Air to Air defense while the human pilots take out what targets they can read by their discretion etc.
You don‘t need the AI to replace every plane or pilot, you could have a traditional plane and pilot to do all the human things and an AI or two to protect the human or engage the targets. Mixed AI/human squadrons.
Deleted Comment
and the enemy doesn't need to beat the AI in dogfight, just the infinite side channel attacks on the training set.
WW3 might be won by adding a few high intensity LEDs to some old MIGs.
https://www.businessinsider.com/clothes-accessories-that-out...
There just aren't that many methods of interacting with a plane >1km away, and even fewer at the 50-100KM ranges that modern fighters are built to fight at.
I don’t think the problem is fundamentally different from the pre-AI state of electronic warfare and countermeasures.
Dead Comment
They're now designed and employed as fighter 'platforms'. Instead of zipping around the battlefield guns and missiles ablaze in 1-on-1 combat, they're low observable long-range systems designed to understand and disrupt the battlefield by employing their electronic and long-range missile systems.
These AI are a natural extension of that. I imagine the 6th generation of fighter platforms will be commanding swarms of fighter drones to do the fighter part of their role.
https://afresearchlab.com/technology/vanguards/successstorie...
"The overarching ACE concept is aimed at allowing the pilot to shift “from single platform operator to mission commander” in charge not just of flying their own aircraft but managing teams of drones slaved to their fighter jet."
https://www.boeing.com/features/2020/05/boeing-rolls-out-fir...
Also, to be honest, these AI controlled fighter platforms scare the shit out of me because there is now potentially fewer human decision points in the system.
Like it or not, you can end up with many times the destructive power in the air orchestrated by 1 guy without having the requisite sanity check of "Excuse the hell out of me, sir, but you want me to bomb WHAT?"
The capability to look at a situation and decide to call off is a feature of warfare that I think is frankly underappreciated.
We may very well be working unintentionally toward creating a world where a small
This does not bode well in terms of the law of large numbers being able to temper the extreme characters that setup may invite.
The only answers I can think of are a) that having human eyes in the sky still has advantages, or b) that long range communication is too unreliable (either as is, or because of possible enemy interference).
You may need an AWACS to direct the battle, but would it have to have human in board?
But is either true? If so, what’s the reason? Is there a c) “we can’t tell the air force yet that pilots who actually fly are a thing from the past”?
Latency + Bandwidth.
Or maybe the E-3 Sentry aka AWACS, but the JSTARs matches the "big plane" better and has a giant synthetic aperature radar for imaging large swaths of the battlespace. We deployed with JSTARs when I went to Iraq as a Shadow 200 TUAV Pilot myself.
We've seen this before, even for a lot more limited and controlled environment like the game Dota the "AI" can beat the humans a few times before the humans learns to exploit it's many weaknesses.
Some program that self learned a (admittedly impressive) number of reactions based on seeing/playing a huge number of simulations is not intelligence so not AI. It cannot reason on the spot and will fall for the most ridiculous of traps. For example the AI that beat the professional Dota players fell for running in circles around a tower forever while getting slowly damaged to death. Even the most simple of mammals (which we do not consider intelligent) would react to the pain at some point and bolt.
My theory is that AI will not exist until we reach AGI. Because with specialized AI you can always fall outside it's area of "expertise" and behave like a stupid bot.
Is this really that different than what humans do?
After watching dozens of AlphaStar commentary videos [1] over the last few months, I was more or less thinking the same thing: the AI has basically evolved a massive ruleset: "Do X. If you see Y, do Z."
Nonetheless, I decided to start playing again myself. Poking around, I saw a recommendation to go through one guy's sort of "training course" [2], and guess what? A lot of it comes down to the same kind of thing. "Send your first overlord to scout their natural. If they haven't expanded to their natural, build one -- ONE -- spine crawler in your natural."
How much of our "intelligence" is really anything more than pattern matching + search? And during the actual dogfight, how much of what the human pilot was doing was anything more than simply pattern matching from their own vast experience racked up in a simulator?
[1] https://www.youtube.com/playlist?list=PLVRQoOk_ltE3Fr1ofRE0Y...
[2] https://www.youtube.com/playlist?list=PLFeZeom2b4Dlt63qmkPO8...
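For what it's worth, here's a crude sketch of what that "Do X. If you see Y, do Z." structure looks like as code. The rules and names below are invented placeholders, not anything AlphaStar actually learned; the point is just that a first-matching-rule list already covers a lot of "smart-looking" play:

    # A toy "ruleset agent": an ordered list of (condition, action) pairs.
    # The rules below are invented examples, not real AlphaStar behaviour.
    RULES = [
        (lambda obs: obs["supply"] >= obs["supply_cap"],   "build overlord"),
        (lambda obs: not obs["enemy_expanded_natural"],    "build spine crawler"),
        (lambda obs: obs["minerals"] > 300,                "expand"),
    ]

    def choose_action(obs):
        # First matching rule wins; fall back to a default macro action.
        for condition, action in RULES:
            if condition(obs):
                return action
        return "make drones"

    print(choose_action({"supply": 14, "supply_cap": 14,
                         "enemy_expanded_natural": False, "minerals": 50}))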
I think your general question is still a good one, but it's worth noting that from a human perspective, Starcraft matches only start off this way. Very quickly, the game state becomes complex enough that decision trees break down and intuition becomes the driving process for high-level human players.
The extent to which AI can begin to compete with this sort of intuitive human processing is most interesting to me. As it relates to this article, I think it matters a great deal if the experiment has constrained the system to the degree that it ceases to operate in that intuitive realm that high-level Starcraft matches operate in.
One example is memory. DeepMind's agent doesn't update its strategy when playing the same player repeatedly, or even over the course of a single game. It can operate at near-peak human performance indefinitely, but what keeps it from being exploited is its deep understanding of the exact rules in play. It was also playing on the ladder under random names to avoid people developing specific counters.
Yes, we have built-in mechanisms that react to situations we have never encountered before. You will never be able to simply sit calmly while something is damaging you, for example.
Pattern matching and search are brute force approximation of actual “intelligence”.
Like the sibling said, the game begins like a decision tree but quickly falls outside that purview.
Seems like with every breakthrough the goalpost of AI gets moved.
I believe a researcher coined a phrase for this but I can't remember what it was.
It also seems like with every breakthrough this complaint is raised without addressing the underlying issues with the new AI's shortcomings.
My hot take: true AI is so far out of reach of our ability it's not even funny, which makes the whole field either a search for the fountain of youth at worst, or at best a search for tools to inform humans or to replace humans in rote tasks. See https://youtu.be/orMtwOz6Db0
For example, would you fool even a child with this? https://cdn.mos.cms.futurecdn.net/s4DuKgTLnS4cngTyiqVkNC-970...
There's an article on Wikipedia called "AI effect", I don't know if that's what you had in mind.
https://en.wikipedia.org/wiki/AI_effect
But I also don't think we're miles away from computers learning some sort of reasoning structure: some sort of causality-type thinking where you have hierarchies which, at some level, are reasonably simple, because humans can only fit so much in their heads. When the computers figure it out, they won't have that problem.
At the moment, yes, there's a huge corpus of patterns and you can make some smart decisions just by being able to learn from the huge library, but it's the difference between knowing that one move tends to beat another, without knowing the why. For instance in sports, you have man-marking vs zone-marking. The naive thing to do is just tabulate how often a team did one or the other vs how often they won. Then break it down even more by who they were facing and various stats. But if you don't have a theory of marking, you're a bit lost for explanation, even if the tabulation clearly says zone marking tends to win. A causal explanation might sound something like "man marking allows the other team to pull you out of shape and gives them the choice of which players face which". It might also tell you that sometimes it's actually smarter to man-mark, eg when there's some player you really feel is dominant and needs to be taken out.
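To make the "naive tabulation" concrete, here's a minimal sketch with invented data (the teams, column names, and numbers are placeholders): it will happily tell you which scheme wins more often, but nothing in the table can tell you why, or when it makes sense to deviate.

    import pandas as pd

    # Hypothetical match records: marking scheme used and the outcome.
    matches = pd.DataFrame({
        "marking":  ["zone", "man", "zone", "zone", "man", "man", "zone"],
        "opponent": ["A",    "A",   "B",    "C",    "B",   "C",   "A"],
        "won":      [1,      0,     1,      0,      0,     1,     1],
    })

    # Naive tabulation: win rate per scheme (pure correlation, no causal theory).
    print(matches.groupby("marking")["won"].mean())

    # Break it down further by opponent -- still just counting, still no "why".
    print(matches.groupby(["marking", "opponent"])["won"].mean())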
I gather that people are working on this causality type AI though, so no doubt we'll see something interesting soon.
You only get ONE chance to beat the AI in a real-life dogfight. So you'd better make sure that your strategy to exploit the AI is going to work.
The USAF's 1960s doctrine that missiles would make guns obsolete was premature, not completely wrong.
> My theory is that AI will not exist until we reach AGI.
You are confusing the definition of AI for AGI so of course you think that. AI doesn’t need to have true understanding to be considered AI, it just has to have the appearance of intelligence.
The best algorithm will win the war.
People, stop bullying your nerds at school if you don't want to lose your next war ;)
Unlikely. You would rather use a swarm of drones against an enemy that has no such technology.
A country that can have a swarm of drones would very likely also have nuclear weapons and deterrence is a thing.
Unless the swarms are easily exploitable like the current "AI"s.
I recall that there was AI beating top Dota players. I'm not familiar with humans figuring out the AI and exploiting weaknesses. Every search I've done just shows articles about the AI winning. By chance do you know where I can read up on humans figuring out the weaknesses of the AI?
> The general strategy is to win by claiming first tower. At 0:00, you aggro the enemy creep wave so that they start following you. Then you walk around in a circle around the jungle, and the enemy wave will start to form a conga line that will follow you around. You then path around the jungle so that on the next wave spawn, you can aggro the wave again and continue to walk around in circles. The AI will burn glyph when your creep wave hits the tower, and for some reason it can't really decide between chasing you or defending the tower. So after about 5 minutes of doing this, your creep waves will eventually destroy the tower and you win the 1v1.
You can also check the ml subreddit for discussions of exploits:
https://www.reddit.com/r/MachineLearning/comments/bfq8v9/d_o...
https://www.reddit.com/r/MachineLearning/comments/6t58ks/n_o...
https://www.reddit.com/r/MachineLearning/comments/bcumrs/d_o...
https://www.reddit.com/r/MachineLearning/comments/6u304t/n_m...
I also recall a post from one of the pros talking about his strategies to try and beat them but I can't find it.
Let's retry this experiment, but the loser of any game gets shot in the head, and the next player only gets basic telemetry while also shitting themselves. A human Dota player with a gun pointed at him will likely perform differently too.
Note that the AI player doesn't get killed if it loses, because it is software. It just doesn't get full telemetry and diagnostics.
Just being able to replace a human-driven fighter jet with one that is piloted by an AI - even a comparatively dumb one - would be an advantage for fighters. The AI driven jet would be more maneuverable straight out of the gate.
The Japanese weren't defeated in the Pacific because they ran out of planes, they were defeated because they ran out of pilots. At the time of the Marianas Turkey Shoot, Japan still had a large carrier force (9) and a large number of aircraft (750), but their pilots were all green, and so weren't very effective in combat. The Battle of Santa Cruz Islands is considered a strategic victory for the U.S. even though it was a huge tactical defeat, because it depleted the stock of trained pilots enough that Japan wasn't able to mount effective resistance for the rest of the war.
Didn't know this. How sad. The more I learn about WWII and its abject brutality, the more I marvel at how much US culture fetishizes it.
Related: this season of Revisionist History has a series about Curtis LeMay and the history of napalm that is equal parts fascinating and sickening.
The best example of such is a hard dive: if you're flying level and push into a dive too quickly, the negative G forces drive all the blood into your brain, potentially causing a hemorrhage. The maneuver human pilots use for a hard dive includes a half barrel roll just to avoid this, and it's still less effective than suddenly pointing down. The AIs used hard dives effectively to shed human pursuers.
This led to the insight that bodies (in some sense) were a missing component of any human-like AI, so they made Creatures to test the theory. The neural nets had virtual bodies with hunger, fatigue, and pleasure/pain receptors; the player interacted with a god-like hand cursor that could pet them or spank them. For the 90s, it worked pretty well on Windows 98.
[1] https://www.youtube.com/watch?v=NzdhIA2S35w
That's what I'd suspect. A jet that's purpose built to be flown by a robot would likely be much more acrobatic, as you're designing for the engineering limits of the materials, rather than a human.
Regardless of the real answer, it's not making me feel like climbing into a jet cockpit and picking a fight with an AI.
On the bright side, horses are now spared much of the direct experience of modern warfare, so maybe humans will eventually follow.
On the flip side, if the oligarchy-with-social-policies was a knock-on effect of mass conscription during the napoleonic wars, a military with more AI suggests an oligarchy without social policies.
CN covid responders' celebration: https://www.youtube.com/watch?v=daVCbgNsVEM
UK covid responders' celebration: https://www.youtube.com/watch?v=4bQG9aB8dWY
(to be fair, I think the latter was celebrating 72 years of NHS, so the choice of a folkloric airframe was likely deliberate.)
The latency, though, would be the primary killer. It's why the Air Force still needs to send drone pilots to Afghanistan - you can have a person piloting the drone during the mission out of a container in Nevada, but they don't have enough reaction time to safely and reliably land the thing in Afghanistan. Control has to be handed off to a local pilot who has a lower-latency control link to the aircraft.
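A back-of-envelope sketch of why the satellite hop alone hurts (altitude and speed-of-light numbers are the standard ones; processing, coding, and queuing delays are ignored, so reality is worse):

    # Rough propagation delay for a geostationary relay vs. a local line-of-sight link.
    C_KM_S = 299_792          # speed of light, km/s
    GEO_ALT_KM = 35_786       # geostationary altitude (straight-down path; slant range is longer)

    one_way = 2 * GEO_ALT_KM / C_KM_S      # ground -> satellite -> aircraft, one direction
    round_trip = 2 * one_way               # see the aircraft's state, then send a command back

    print(f"one-way via satellite:  ~{one_way * 1000:.0f} ms")      # roughly 240 ms
    print(f"round trip via satellite: ~{round_trip * 1000:.0f} ms") # roughly 480 ms

    local_round_trip = 2 * 50 / C_KM_S     # direct link from a pilot ~50 km away
    print(f"local round trip: ~{local_round_trip * 1000:.2f} ms")   # well under a millisecond

Half a second of control lag is tolerable for loitering at altitude, but not for flaring onto a runway.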
I could tell that the human pilot had to reorient himself after encounters, while the AI never needed to waste time grasping the situation and was able to take advantage immediately, every time.
Or introduce noise into the simulated state to reproduce real world sensor capabilities.
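Something like this minimal sketch, say (the state fields and noise levels are invented for illustration):

    import numpy as np

    rng = np.random.default_rng(0)

    def noisy_observation(true_state, sigmas):
        # Corrupt a perfect simulator state with Gaussian sensor noise.
        return {k: v + rng.normal(0.0, sigmas.get(k, 0.0)) for k, v in true_state.items()}

    true_state = {"range_m": 1800.0, "bearing_deg": 42.0, "closure_m_s": 120.0}
    sigmas     = {"range_m": 25.0,   "bearing_deg": 0.5,  "closure_m_s": 5.0}

    print(noisy_observation(true_state, sigmas))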
The inability to change and adapt to new technologies plagues the US Air Force and Navy, leaving both of them completely ill-equipped for what any war that doesn't involve fighting impoverished herdspeople would look like today. Only a fraction of either's budget is spent on systems that are actually good at anything other than showboating and being profit centers for military contractors. The majority of their budgets are captured by outdated manned metal (surface ships / piloted aircraft), most of which would be expended very quickly if a conflict ever did break out.
God forbid we ever have another conventional war, but if we do you can look forward to a couple dozen hypersonic glide vehicles (costing pennies on the dollar) sinking entire carrier groups in minutes, and swarms of drones just flying past overpriced F-35s, letting them expend all their munitions (which probably cost more than the drones) and still making it through to hit their intended targets.
But the US Navy still has some kinda-OK submarines, so they've got that going for them. The Air Force... I don't know, I guess they can keep talking about "stealth" (fancy paint) and how it's effective at avoiding detection by the underfunded militaries of third-world dictators.
Until it becomes completely accepted that flying robots kill human beings every other day... oh, but wait. Already the case, never mind. It is okay, I guess, as long as you are born in the country equipped with them.
I had an interview with Improbable in London; the CEO of their "Defense" research arm is a complete nutjob without any care for ethics or any forward thinking about the risks of what he is building. And that's a tech startup; I can't even imagine what the mindset is in the R&D labs of the old-school defence industry. Probably ridiculously entitled and self-congratulatory.
What I think I am looking for is a short Dungeons and Dragons style combat set of rules, and a blog post walking you through the issues.
I mean something like "Jet fighter A can fire missile X with a 50% chance of hitting another jet at a range of 200 mi."
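Roughly this kind of toy sketch, say (every name and number here is made up; it's just the shape of the simplified rules model I mean, not anything real):

    import random

    # Made-up "stat blocks" in the spirit of a tabletop rules summary.
    MISSILES = {
        "missile X": {"max_range_mi": 200, "hit_chance": 0.5},
        "missile Y": {"max_range_mi": 60,  "hit_chance": 0.8},
    }

    def resolve_shot(missile, range_mi):
        # One "attack roll": check range, then roll against the hit chance.
        spec = MISSILES[missile]
        if range_mi > spec["max_range_mi"]:
            return False
        return random.random() < spec["hit_chance"]

    print(resolve_shot("missile X", range_mi=180))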
I remember a couple months ago, a video explaining the physics of a submarine being blown up showed up here. That was the first time I learned that a strategy for submarine warfare was to detonate an explosion under the center of the enemy sub, causing the water to rush up and creating a pressure differential that would crack the sub. Does that mean that's how sub warfare is done now? Or that that's the standard attack vector? Or that sub warfare would even be done with torpedoes now? No. The general public, and probably anyone not actively in the military with top-secret clearance, only get bits and pieces of modern warfare strategy.
Edit: When I said the military might have those specs under lock and key, I meant that my understanding is that the military is extremely compartmentalized and, aside from the highest-ranking generals, there is almost no aggregation of the details of what it is capable of.