Of course my trial version is nowhere near there, but from watching several videos it appears that even the highest settings are still far from a proper high-poly export.
Since I work with 3D prints, that is a blocking issue for me, BUT the videos are all several months old.
Does anyone with more recent experience with Plasticity have fresher information about that?
Very wise words. Coming from sw/hw industries I could probably work around heat pump microcontrollers without too much hassle, and I know well the pain of physical components messing up your debug process. But such industries rarely rely on open source, and all the OSS I used was for personal projects. That is definitely a big limit for my future work opportunities! :/
Maybe some "slow" strategy game, that updates upon certain events but might remain unmodified for hours at a time? Or - more in general - an application that is required to be on for a long time but really doesnt' change often.
Why not train your own personal AI on your artwork? Corridor Digital did this in their latest attempt to automate animation: they hired an illustrator to create an animation style for them, then trained the AI on their drawings.
1 - Since I'm either working for game companies or for my own project (https://fsd-wargame.com/) using AI-generated things is kinda damaging in terms of marketing. You never know when some uproar could arise against a project/game solely based on more or less petty outcries against AI. I generally sympathize with artists, but sometimes it's just whiny.
2 - My illustrations are line-art and cartography (https://www.artstation.com/thelazyone), which are not the easiest to handle with AI. I'm sure that with enough effort there's gonna be a good model, but I haven't seen any so far.
I'm a sometimes-illustrator (though my style is pretty far from what Generative AI is doing), and I recently published a 1.1 of a game manual which uses Midjourney images. I'm currently investing in a "proper" illustrator because the MDJ images lack character, but it's also true that a few months from now this might change: I'll stick with the illustrator for more consistency in the images, but the AI could probably do a fancier job there.
Besides, the "things will change in 2 months" point is a good one, but it's been used since a year and a half and things haven't changed yet. Sure, the quality of the produced images improved, but not in a qualitative scale.
Side note: the link to civitai leads to https://sambleckley.com/writing/civitai.com/images which is a dead link.
Probably if we had materials with a billionth of the resistance of silver they would work, but we don't. And we do have superconductors, luckily. :)
The second solution in particular is fascinating, although in its current state it doesn't offer a good way to generate a seamless heightmap. I guess that combining it with some Perlin noise to determine which areas get the starting seed points for the ridges would work?
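Roughly what I have in mind, as a toy sketch (nothing to do with the video's actual implementation; the noise here is just a cheap bilinear stand-in for Perlin, and the threshold, ridge count and random-walk growth are all placeholder choices):

    import numpy as np

    def smooth_noise(size, cell=16, seed=0):
        # Cheap stand-in for Perlin noise: a coarse random grid, bilinearly upsampled.
        rng = np.random.default_rng(seed)
        coarse = rng.random((size // cell + 2, size // cell + 2))
        ys, xs = np.mgrid[0:size, 0:size] / cell
        y0, x0 = ys.astype(int), xs.astype(int)
        fy, fx = ys - y0, xs - x0
        top = coarse[y0, x0] * (1 - fx) + coarse[y0, x0 + 1] * fx
        bot = coarse[y0 + 1, x0] * (1 - fx) + coarse[y0 + 1, x0 + 1] * fx
        return top * (1 - fy) + bot * fy

    def ridge_heightmap(size=256, threshold=0.8, n_ridges=30, n_steps=200, seed=0):
        rng = np.random.default_rng(seed)
        field = smooth_noise(size, seed=seed)
        height = np.zeros((size, size))
        # Only cells where the noise field is high enough may spawn a ridge seed.
        candidates = np.argwhere(field > threshold)
        if len(candidates) == 0:
            return height
        picks = rng.choice(len(candidates), size=min(n_ridges, len(candidates)), replace=False)
        for y, x in candidates[picks]:
            # Grow each ridge as a random walk, stamping decreasing height along the way.
            for i in range(n_steps):
                height[y, x] = max(height[y, x], 1.0 - i / n_steps)
                y = int(np.clip(y + rng.integers(-1, 2), 0, size - 1))
                x = int(np.clip(x + rng.integers(-1, 2), 0, size - 1))
        return height

    hm = ridge_heightmap()

The idea is that the low-frequency noise decides *where* mountainous regions may appear, while the ridge growth itself stays local, so tiling the noise would be what makes the result seamless.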
Thoughts?
Special mention for the remarkable graphics and rendering work behind the video. Some of those 1-second transitions imply a considerable amount of custom code, from the corrosion examples to the various overlaid heightmaps.