Hello everyone, I would like to see if there is any interest in this little project that I have been working on for the past few years.
Could be relevant, seeing the direction in which the mainstream game engines are going.
I didn't really like any of the existing options, so I tried to make my own, and it turned out to be easier than expected.
It's sort of like a low-budget Unreal/Source, but with open-world streaming support, and it's free and open source. Very old-school, but optimized for more modern hardware. Very fast, too.
Still not production ready, but it seems like it is mostly working.
I want to finish a few larger projects with it to see what happens.
Btw, the name is probably temporary.
1. Affordability: A lot of people, especially in third-world countries, are very poor and can't afford to buy hardware to run Turbobloat.
2. e-Waste: Producing computer chips is very bad for the environment. If modern software wasn't Turbobloated, you would buy new hardware only when the previous hardware broke and wasn't repairable.
3. Not putting up with Turbobloat: Why spend money on another computer if you already have one that works perfectly fine? Just because of someone else's turbobloat? You could buy 1000 cans of Dr. Pepper instead.
Took the words right out of my mouth. What a great project. Please keep posting your progress.
Still, higher resolutions were not just invented because of Turbobloat.
This was just a joke from the site that I actually took seriously!
There is no 800x600 limit.
I fear the turbobloat is still with us.
From context, I interpret it to be ‘graphics tech I don’t like’, but I’m not sure what counts as turbobloat.
If you're making a game that needs those features, obviously you'll need to bloat up. If you're not, maybe this SDK will be enough and be fast and small as well.
Of course for entertainment it's difficult to judge, especially when you may have more fun on an old Game Boy than on a brand-new 1000W gaming PC.
This is doing a lot of heavy lifting in this sentence.
What you're talking about is called the embodied energy of a product[0]. In the case of electronic hardware it is pretty staggeringly high if I'm not mistaken.
[0] https://en.wikipedia.org/wiki/Embodied_energy
Just want to say this line was great, very Terry Pratchett. Feels like something Sam Vimes would think during a particularly complex investigation. I love it and hope you keep it moving forward.
Haven't gotten a chance to mess around with it, but I have some ideas for my AI projects that might be able to really utilize it.
The project looks awesome though.
"Temporal" to mean that at any given slice of time during a running application all objects have a signature that matches a type.
Yet most programming languages only allow compile-time analysis and "runtime" is treated as monolithic "we can't know at this point anything about types"
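To make that concrete, here's a minimal C++ sketch of what checking a type signature at a given slice of time could look like. The `Signature` bitmask and `Entity` layout are my own illustration, not anything from this engine:

```cpp
#include <cassert>
#include <cstdint>

// Hypothetical: each component an entity currently has sets one bit.
enum Component : uint32_t {
    TRANSFORM = 1 << 0,
    RENDER    = 1 << 1,
    PHYSICS   = 1 << 2,
};

struct Entity {
    uint32_t signature = 0;  // what the object *is* right now
};

// A "temporal type" check: does the entity match this signature
// at this moment? The compiler can't know; the runtime can.
bool matches(const Entity& e, uint32_t required) {
    return (e.signature & required) == required;
}

int main() {
    Entity e;
    e.signature = TRANSFORM | RENDER;
    assert(matches(e, TRANSFORM | RENDER));   // holds at this time slice
    e.signature &= ~RENDER;                   // the "type" changes at runtime
    assert(!matches(e, TRANSFORM | RENDER));  // and the old check now fails
}
```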
And that Vetinari's entity component system might seem complicated, but it works, damn it, and it makes the city function.
(I'm just glad someone got the reference)
Is the problem here that using a nodal editor encourages/incentivizes you, through its UX, to assign properties and relationships to e.g. a `Vector` of `Finger`s, but then you can't actually write code that makes the `Vector<Finger>` do anything, because in the end it's just a "collection of things", not its own "type of thing" that can have its own behavior?
And does "everything is an Entity, just write code" mean that there's no UX layer that encourages `Vector<Finger>` over just creating a Hand class that can hold your Fingers and give the hand itself its own state/behavior?
Or, alternately, does that mean that rather than instantiating "nodes" that represent "instances of a thing that are themselves still types to be further instantiated, but that are pre-wired to have specific values for static members, and specific types or objects [implicitly actually factories] for relationship members" (which is... type currying, kind of?), you instead are expected to just subclass your Entity subclass to further refine it?
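For what it's worth, here's a tiny C++ sketch of the distinction I'm asking about (names are illustrative, not from any engine): a bare collection has nowhere to hang behavior, while a `Hand` that owns its `Finger`s does:

```cpp
#include <vector>

struct Finger { float curl = 0.0f; };

// Option A: just a "collection of things". No place for aggregate
// state or behavior; code acting on it has to live somewhere else.
using Fingers = std::vector<Finger>;

// Option B: a "type of thing". The hand itself owns state and behavior.
class Hand {
public:
    void grip(float strength) {               // behavior on the whole
        for (auto& f : fingers_) f.curl = strength;
        gripping_ = strength > 0.5f;          // state about the aggregate
    }
    bool gripping() const { return gripping_; }
private:
    std::vector<Finger> fingers_ = std::vector<Finger>(5);  // five fingers
    bool gripping_ = false;
};

int main() {
    Hand hand;
    hand.grip(0.8f);
    return hand.gripping() ? 0 : 1;
}
```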
“Also when creating things with nodes, you have to go back and forth between node GUI and code.”
You can see Godot's Node/GDScript setup as a bit of a response to this argument. Or rather, they try to make the "going back and forth" as seamless and integrated as possible, with things like the $ operator and autocomplete.
That said, I do think at the end of the day the "thing is a thing" mindset ultimately prevails, as you have to ship a game.
I'm trying to wrap my head around using scenes vs. nodes in something simple like a 2D platformer.
Platforms:
My thinking: I'm gonna be using a ton of platforms, so it'd make sense to abstract the nodes that make up a platform into a scene, so I can easily instance in a bunch.
Maybe I'm already jumping the gun here? Maybe having a ton of an object (set of nodes) doesn't instantly mean it'd be better off as a scene?
Still, scenes seem instinctually like a good idea because it lets me easily instance in copies, but it becomes obvious fast that you lose flexibility.
So I make a scene, add a staticbody, sprite, and collision shape. I adjust the collision shape to match the image. Ideally at this point, I could just easily resize the parent static body object to make the platform whatever size I want. This would in theory properly resize the sprite and collision shape.
But I'm aware it's not a good/supported idea to scale a collision shape indirectly; instead you're supposed to directly change its extents or size. So you have to do stuff based on the fact that this thing is not actually just a thing, but several things.
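To illustrate what I mean (made-up types here, not Godot's actual API), the "one thing" ends up being a wrapper I'd have to write myself, fanning a single resize out to the several things it's made of:

```cpp
// Hypothetical node types, not Godot's API; the point is that
// "resize the platform" can't just scale one parent node.
struct Vec2 { float x = 0, y = 0; };

struct Sprite       { Vec2 size; };
struct CollisionBox { Vec2 extents; };  // resized directly, never scaled

class Platform {
public:
    // One logical operation on "the thing"...
    void resize(Vec2 size) {
        sprite_.size = size;                           // ...fans out to
        collider_.extents = {size.x / 2, size.y / 2};  // each child it is
    }                                                  // actually made of
private:
    Sprite sprite_;
    CollisionBox collider_;
};

int main() {
    Platform p;
    p.resize({128, 16});  // sprite and collider stay in sync
}
```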
This seems like a bad idea, but maybe one way I could use scenes for platforms is to add them to my level scene and make each one have editable children. Problem with this is I'd need to make every shape resource unique, and I have to do it every time I add a platform. This same problem will occur if I try duplicating sets of nodes (not scenes) that represent platforms, too. Need to make each shape unique. That said, this is easier than using scenes + editable children.
Ultimately the ‘right’ way forward seems to be tilemaps, but I wanted to understand this from a principles perspective. The simple, intuitive thing (to me) does not seem possible.
When I ask questions about this kind of stuff, 9/10 times the suggestion is to do it in a paradigmatic way that one might only learn after spending a lot of time with an engine or asking the specific question, rather than what I would think is a way that makes dumb sense.
Sharing behaviors, or making things look or act a little bit like some other thing, becomes an absolute nightmare, if not outright impossible, with "a thing is a thing."
There's a reason graph-based systems and ECS are basically the cornerstone of every modern engine: they work, and they're necessary.
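A minimal sketch of the idea (illustrative C++, not any particular engine's API): behavior lives in systems that iterate over components, so anything that has the components shares the behavior, no inheritance required:

```cpp
#include <cstdint>
#include <unordered_map>

// Entities are just IDs; all data lives in component tables.
using EntityId = uint32_t;

struct Position { float x = 0, y = 0; };
struct Velocity { float dx = 0, dy = 0; };

struct World {
    std::unordered_map<EntityId, Position> positions;
    std::unordered_map<EntityId, Velocity> velocities;
};

// A "system": shared behavior for every entity that has both
// components, regardless of what kind of thing it nominally is.
void move(World& w, float dt) {
    for (auto& [id, vel] : w.velocities) {
        auto it = w.positions.find(id);
        if (it == w.positions.end()) continue;
        it->second.x += vel.dx * dt;
        it->second.y += vel.dy * dt;
    }
}

int main() {
    World w;
    w.positions[1]  = {0, 0};
    w.velocities[1] = {1, 0};  // player, platform, particle: same code path
    move(w, 0.016f);
}
```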
The Half-Life and Morrowind engines were in a unique situation: they were put together by enthusiastic programmers who were paid to develop stuff they thought was cool. You end up with minimal engines and great tech, suited to the needs of professional game developers.
This seems like something that sits in between a raylib and a Unity. I haven't used it, but I worry that it doesn't do enough to appeal to amateur programmers, yet does too much to appeal to the kind of programmer who wants a smaller engine. I could be very wrong though; I hope to be very wrong. It seems like the performance here is very nice and it's very well put together. There's definitely a wave of developers coming out of Unity frustrated right now. As the nostalgia cycle moves to the 2000s, there's a very real demand to play and create games that are no more graphically complex than Half-Life 2.
Anyway, great project. Great web design. Documentation is written in a nice voice.
When you're designing both, you can take advantage of features you add but also avoid the ones you can't do well, or even change the art style to "fit" the engine. Pixelated, angular mobs fit Minecraft quite well, but once they start getting more and more detailed you're in an "uncanny valley" where they look worse and more dated than Minecraft, until you finally have enough polygons to render something decent.
My argument was mainly about these more generalized engines, like raylib, 'Tramway', or Source.
"At work if we want to experiment with a new idea I have to assembly a team, and spend at least a month before we have something we can work with. Meanwhile, at home, I can make a whole Doom campaign in one evening."
(quoting from memory, sorry)
https://store.steampowered.com/curator/42392172-GZDoom-Games...
That is to say, I don't think people are using Unity because they were mistaught by complexity loving professors.
I wonder if this kind of architecture might also be a pretty good approach. The fact that they were able to port the game to another engine within a day is pretty impressive.[0]
[0] https://xcancel.com/unormal/status/1703163364229161236
I've never played the game, but my understanding is that Slay the Spire largely impresses on a design and artistic front, not a technical one. Its engine requirements were not based on feature set or code quality, but on what the developers knew. So they probably picked Unity because it was ubiquitous. Education starts the problem, and then devs who need something common they can hire for continue the problem. I don't blame devs for this; it's the right choice to make, and obviously Slay the Spire is great. But I am saying that this is a force that drives down the quality of game engines.
Look at what Epic Games did with Fortnite. They killed a game with a competitive scene that ran smoothly, all for turbobloat graphics and skins.
There is a similar phenomenon with ArcGIS.
Very cool project. And the website design is A+
I feel like this is only true for people who happened to luck out with slightly overpowered hardware in very specific time periods.
As someone who used pretty average hardware in the Windows 98/2000/XP era as a teenager, even a low-end modern laptop with an SSD running Windows 10/11/KDE/GNOME/whatever is massively more responsive, even when running supposedly bloated webapps like VS Code or Slack.
So, what does it mean? Just "very bloated"?
Edit: Reading around on the website and seeing more terms like "Hyperrealistic physics simulation" makes me believe it just means "very bloated".
If you gave it to me in a cleanroom and told me I had to share my honest opinion, I'd say it repeats universally agreeable things and hitches them to some sort of solo endeavor to wed together a couple of old 3D engines, with a lack of technical clarity, or even prose clarity, beyond "I will be better than the others."
I assume given the other reactions that I'm missing something, because I don't know 3D engines, and it'd be odd to have universally positive responses just because it repeats old chestnuts.
Certain vintage hardware had a "turbo" button to unleash the full speed of the newer CPUs. The designers were blind to the horrors of induced demand.
This seems to be an increasingly common point of view among those of a certain age.
It is definitely the case that the art of a certain sort of texture mapping has been lost. The example I go back to is Ikaruga, where the backgrounds are simply way better than they have any right to be, especially a very simple forest effect early on. Some of the PS2 era train simulators also manage this.
The problem is these all fall apart when you have a strong directional light source like the sun pointed at shiny objects, and the player moves around. If you want to do overcast environments with zero dynamic objects though you totally could bypass a lot of modern hacks.
Seriously, the plot of Silent Hill was invented to justify optimization hacks: you have a permanent foggy space called "fog space" to make it easier to manage objects on screen. The remake instead stupidly wastes a ton of processing trying to make realistic (instead of supernatural-looking) fog.
The point about Lumen stands though. Baked lighting would have been much better in this case.
The ‘art’ of making stuff look good has not been lost at all. It’s just very unevenly distributed.
When a team has good model makers and good texture artists and good animators and good visual programming, it looks great, whether it’s built in Unreal or Unity or a bespoke engine or whatever.
There are a lot of technically polished Unity titles that get knocked because they look like very well rendered plasticine, for want of a better description.
For example, there was an argument on here not too long ago where various people pushing the “old graphics were better” (simplification) did not understand or care that the older titles had such limited lighting models.
In the games industry I recall a lot of private argument on the subject of whether art teams would ever understand physically based models, and this was one of the major motivations for a lot of rigs that photograph things and generate materials automatically (in AAA since around 2012). The now-widespread adoption of the Disney model, precisely because it is understandable, has contributed to a bizarre uniformity in how things look that I do think some find repulsive.
Edit to add: I am not sure this is a new phenomenon. Go back to the first showing of Wind Waker for possibly the most notorious reaction.
Ironically, as we've gotten hardware with more VRAM and higher bus speeds, we've decided to go with bigger textures instead of more of them. The same with normal mapping: instead of using normal maps alongside more subdivided models, we've just decided that normal maps are obsolete and that physically modelling all the details is the technologically forward way. Less-pointy spheres are one thing, but physically modelling all the cracks and scrapes on the sphere is just stupid and computationally wasteful.
This right here is precisely what I alluded to in another reply as the motivator for generating meshes and PBR materials from controlled photography. Basically, you now have enough parameters per texel, interacting in distinctly unintuitive ways, that authoring them by hand is a nightmare, hence people resorting to what you describe.
Even a "2D" game like Factorio has amazing polish difference between original release, 1.0, and today.
(This can very obviously be seen with modded games, because the modded assets often are "usable" but don't look anywhere near as polished as the main game.)
> Also when creating things with nodes, you have to go back and forth between node GUI and code.
> All of the mainstream engines have a monolithic game editor. It doesn't matter how many features you use from it, you still have to wait 10 minutes for all of them to load in.
These notes really resonated; the debug loop, even with Godot using minimal fancy features, felt a lot slower than other contexts I've programmed in. Multiple editors working around a single data file spec is also a cool idea! Having found that a unified IDE makes it easier for different developers to create merge conflicts, I could see how more special-purpose editors might also help developers in different roles limit the scope and nature of their changes. Keen to see how the engine progresses!
Managed to contribute my bit from an underpowered netbook.
I had never written a line of C# before, but I'll be damned if I'm going to concede TDD from the CLI. I knew it could be done, and I made it work. Everybody thought I was crazy, though, and none of the sponsors' DevRel were any help.
And, of course, the biggest point of friction for us, that weekend, was our beefiest machine still had to boot and reboot the damned Unity IDE for a thousand years! Incredible the fetters some folks tolerate.
I like the C++ principle of paying only for what you use.
The RPG engine was just an example of why it may not be such a universal thing; I'm not saying it's bad. But clearly you think that is not "bloat", whereas to some it might be. So maybe it's better to head this off at the pass and just write a little paragraph with some examples of bloat you have observed in other engines and have consciously avoided in Tramway.