dusted · a year ago
Wow, that's amazing... I think this is the first time I've felt sick to my stomach watching AI-generated content. A sadness rushed over me, because these look so good, like every soul-less super-high-quality shovelware asset ever made.

I'm not putting it down; it truly is an amazing achievement, and it feels like it marks the end of hand-made assets. I don't even feel bad for the artists, I just feel bad for myself, because I want things made by people, for the inherent property that they were made by people. This is the same reason I don't care for procedurally generated games: I want to walk worlds that arose in the minds of others. If I wanted a procedurally generated world that just exists for no reason in particular, I'd walk outside.

I don't want content, I don't just want stuff to sift through, I want pieces of art, made by my peers with their own visions, ideas, values, insights and personalities. They don't need to look that good, they just need to have been made with purpose.

jeffhuys · a year ago
Think of it this way: AAA games will now have to do something MORE than just "amazing graphics" in order to set themselves apart. Because if I'm honest, almost all of the newest games coming out are just the same gameplay + updated graphics.

Well, guess what, very soon even I could do that. So what do these studios have in store to make us come back to them?

nashashmi · a year ago
Very soon, the AI generator in your Xbox 2030 could upscale every object to incredible resolution.

This shifts the attention to story development and away from graphic designers. It does not mean cheaper games. It means more successful indie games with fewer team members. It also means fewer games, because as I understand it right now, the only reason new games are pumped out is to keep the larger industry perpetually employed and other, more time-consuming projects funded enough to be developed.

qwertox · a year ago
> if I wanted a procedurally generated world that just exists for no reason in particular, I'd walk outside

I don't know why, but this comment reminded me of an experience I had a few years ago, when I started exercising outdoors. I rarely went outside prior to that and stayed in a relatively dark room.

One day I looked at the sky and thought: wow, these clouds do look like the ones in video games, thinking of Horizon and Assassin's Creed. This just pertains to the comment about the "procedurally generated" outside world.

While looking at the assets I also felt a bit of sadness. I was looking at the "Two-story brick house with red roof and fence" and thinking about how it reminded me of the three.js animation/keyframes example [0].

I asked myself if we will lose something very valuable. The three.js example was hand-crafted by people, with real intention behind every choice made, while with Trellis it's just "poof, there it is", an amalgamation of all the work found on the internet and possibly in games.

Some value will be lost through AI, but this makes handcrafted content even more valuable. The question is whether we will really value it enough for it to be sustainable for the artists.

[0] https://threejs.org/examples/#webgl_animation_keyframes

Kiro · a year ago
> I want to walk worlds that arose in the minds of others, not just worlds

A significant portion of game developers hate level design and the only reason they don't do procedural is because it's hard, so they are forced to build hand-crafted worlds. I'm one of those and I would find it pretty hilarious if anyone played my game thinking the levels "arose" in my mind, like I'm some kind of profound artist. I take great pride in other aspects of game development, but my level design is not one of those.

InDubioProRubio · a year ago
I poured ten years of my life into an open-source game nearly nobody played. Screw all that handcrafted lovechild-of-artist-brain shit. My 2nd game used store assets, and I was free to get things done.
dusted · a year ago
> A significant portion of game developers hate level design

And that shows, that really, really shows :)

Now they get to make even more of the soul-less trash in a shorter time.

I'm not putting this tool down, it's an amazing technical achievement, and the results are absolutely mind-blowing, but, to me, it is what it is, and it's just like, my opinion, dude, not some statement of absolute truth.

You hating level design and wishing you didn't have to do it at all has absolutely no bearing on my wanting games where the assets are made by hand.

Conversely, me not wanting products made by people who don't like making them should have absolutely no influence on you. I don't care if your passion is some other field of game creation: go do that, and have someone who enjoys level design do the levels. If you can't, well, then I guess you'll have to accept that I might not want your game, and that's okay too, for both of us. You don't have to make something _I_ in particular like, and I don't have to accept your criteria for what I like.

I want, as an inherent property of the stuff I consume, a few things whose merits can be argued about endlessly. But I'm not arguing about their merit; my criterion for selection is the inherent property itself.

I'm not arguing whether there are any differences, I'm not arguing that one is better than the other, and I'm not arguing why one should be chosen over the other. I'm simply stating that among my selection criteria is that particular property of origin. It in itself, alone, nothing about it, just it.

I want movies recorded on actual film, not movies that look like it: the inherent property, not its merit.

I want books written by human minds, not books that "you can't prove was not".

I want paintings painted by pencils held in human hands, guided by human hearts and minds. Regardless of whether I am looking at a photograph of that painting, the property of its origin is important to me, not its merits or lack thereof.

So yeah, you can attack the merits of doing things one way or another all day long, but you don't get to say what I can and cannot choose as my selection criteria.

danielvaughn · a year ago
If it makes you feel any better, the arena of human competition isn't going to fundamentally change just because of this technology, IMO. Yes, we'll see a flood of slop as it becomes more widely available. But the real artists, the ones who want to make things with purpose, will learn how to use this technology as a stepping stone towards something even greater.

Look at people like Martin Nebelong - they’re learning how to leverage AI without losing the human in the loop.

https://x.com/martinnebelong?s=21&t=cTpE-rRbCiocUlN0VaSheQ

kurokikaze · a year ago
It's a really good prototyping tool for those who cannot do 3D assets, in the same way visual scripting opened up game development/modding for those not really familiar with programming (Unreal Blueprints, for example). So yeah, I'm okay with models I can throw into my prototypes without learning Blender/Maya/whatever. Sure, it may look uneven and strange, but at least it's content.
elif · a year ago
What if 2,000 people from your community collaborated on an art piece that spoke to their own personal experience?

The artistic message would be disjointed, muddy, but indisputably an unmitigated human expression.

So you put an artistic director in charge of curating and unifying the collective work.

Still a human expression.

This is what AI represents, and what the prompt writer represents.

The data in an LLM is undeniably human. Everything it "knows" is an extension of, and exclusively composed of, real human data.

The prompt writer has a choice how much of his own human input to prioritize, and how much raw humanity to allow spontaneously.

plasticeagle · a year ago
Do not worry.

Art is much more than pictures on your monitor. If you want pieces of art made by your peers, visit your local galleries and buy it. I don't know who you are or where you live, but I'm willing to bet that wherever it is, local galleries exist there, and the artists who exhibit in them would love to sell some of their work.

And you can be sure that human-made art will remain, and be valued, because art is what humans love to make most of all.

BenoitP · a year ago
> if I wanted a procedurally generated world that just exists for no reason in particular, I'd walk outside

Oops, old.reddit.com/r/outside/ is leaking again

airstrike · a year ago
Wow, these look amazing. I'm a layman, but I think this is what everyone's been thinking about ever since the first NeRF demos?

EDIT: I went looking for those threads and found my own comment wishing for this 5 years ago https://news.ycombinator.com/item?id=22642628

The next step is to automatically add "nodes" to the 3D images where the model can pivot, rotate and whatnot and then boom, you have on-demand animated, interactive content.

Feed it some childhood photos and recreate your memories. Add an audio sample from a loved one and have them speak to you. Drop into VR with noise-cancelling headphones for extra immersion. Coming soon! Click here to join the "Surrender Reality" waitlist

Kaijo · a year ago
>The next step is to automatically add "nodes" to the 3D images where the model can pivot, rotate and whatnot and then boom, you have on-demand animated, interactive content.

The next step is to generate models with higher quality mesh topology that allows animation and editing without breaking the mesh. I've done a lot of retopologizing and if I (or AI) were to rig these models as-is there would be all kinds of shading and deformation issues. Even without animating they are glaringly triangulated up close. But I suspect really high quality 3D asset generation is just around the corner because all you'd have to do is join up the approach seen here with AI quad re-meshing based on estimated direction fields and feature detection, which is also getting scarily good.

taikon · a year ago
Anywhere you'd recommend a hobbyist can learn more about remeshing or feature detection?
woctordho · a year ago
At this point maybe meshes are not the best representation for animation and editing. We can just use latents of neural networks
sangnoir · a year ago
> The next step is to automatically add "nodes" to the 3D images where the model can pivot, rotate and whatnot and then boom, you have on-demand animated, interactive content

My gut says a 3D engine + this would be a superior solution to the current approach of rendering rasterized video directly from the latents (coincidentally, Sora got released today).

It may not be tractable to train a network to rig and animate meshes, as well as to set up an entire scene as a "digital twin" of a random video, but I imagine such a setup would give finer-grained control over the created video while keeping everything else in it unchanged.

vunderba · a year ago
> "The next step is to automatically add "nodes" to the 3D images where the model can pivot, rotate and whatnot and then boom, you have on-demand animated, interactive content."

Well, I'm not really sure what you're talking about here wrt nodes (adding arbitrary rotation/zoom sounds great in theory if all you're looking for is a lazy Susan or spinning exorcist heads), but the next steps will likely be more about ensuring sane symmetrical topologies, better UV maps, and automatically building rigging (FK/IK) to allow for easy animation.

airstrike · a year ago
I meant rigging, but I'm a layman so I don't know the terminology. But yes, symmetrical models with simpler meshes and better UV maps would definitely be needed to make it work as I'm imagining it
movedx · a year ago
I'm interested to see how this affects 3D artists in game development studios. Will those studios use these tools and keep their artists, allowing them to push out more and more content, faster, and easier, or just keep a bunch of artists around, drop the other 80% of them, and use the tools to _replace_ those artists?
pfisch · a year ago
Last time I looked at these, the lighting was baked into the textures, and the meshes were asymmetrical and insane. Not usable by a game dev studio.
DecoySalamander · a year ago
Studios don't have that many artists already - most of the "heavy lifting" is outsourced to asset production companies in China. I can see a future where these are replaced by AI and the main job of the in-house artists is to fix problems with the generated output.
8n4vidtmkvmk · a year ago
I hope they use it to create a bigger variety of assets. In lots of large games you start to notice where they've reused assets that could have used some more variation.
TacticalCoder · a year ago
> ... and then boom, you have on-demand animated, interactive content.

And in addition to that, it's also useful for rendering still pictures. 2D images generated by AI so far have incorrect lighting and many errors. Once it's a 3D scene rendered by something like Blender (which is free), it's going to look flawless: the lighting is going to be correct (and configurable), and all the little details that are wrong are going to be easy to fix.

We already have insanely powerful tools and apparently from here it's only going to get way more powerful real quick.

baq · a year ago
As a newly minted 3D printer owner, my next step is accurate dimensions, material and nozzle-diameter awareness ;) then some CAD-like support where you can specify constraints on… things?
spaceships · a year ago
This isn't anything parametric, but I 3D printed a model from Trellis: https://news.ycombinator.com/item?id=42375951
9cb14c1ec0 · a year ago
It's not perfect, but it's significantly better than most that I've tried. Every time I've tried a 3D model generator up to this point, the result was unbelievably bad. This time it was medium good. Also, give me a file format I can drop right into Orca Slicer.
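On the slicer-ready-file wish: binary STL (what most slicers, Orca Slicer included, accept) is simple enough that a triangle mesh can be written with nothing but stdlib Python. A minimal sketch, assuming you already have the triangles as vertex tuples; in practice you would more likely export via a library like trimesh or Blender, and the file name here is a placeholder:

```python
import struct

def write_binary_stl(path, triangles):
    """Write triangles [(v0, v1, v2), ...], each vertex an (x, y, z)
    tuple, as a binary STL file."""
    with open(path, "wb") as f:
        f.write(b"\0" * 80)                         # 80-byte header (unused)
        f.write(struct.pack("<I", len(triangles)))  # uint32 triangle count
        for v0, v1, v2 in triangles:
            # Facet normal from the cross product of two edge vectors
            ux, uy, uz = (v1[i] - v0[i] for i in range(3))
            wx, wy, wz = (v2[i] - v0[i] for i in range(3))
            n = (uy * wz - uz * wy, uz * wx - ux * wz, ux * wy - uy * wx)
            length = (n[0] ** 2 + n[1] ** 2 + n[2] ** 2) ** 0.5 or 1.0
            f.write(struct.pack("<3f", *(c / length for c in n)))
            for v in (v0, v1, v2):
                f.write(struct.pack("<3f", *v))
            f.write(struct.pack("<H", 0))           # attribute byte count

# write_binary_stl("model.stl", mesh_triangles)
```

Each facet is a fixed 50-byte record (normal + 3 vertices as little-endian floats, plus a 2-byte attribute count), which is why the format needs no real parser on the slicer side.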
jjcm · a year ago
I'm impressed. I used layer diffusion to make this low poly airship: https://image.non.io/b3f843be-b1b4-468a-a0ec-9d58b191beee.we...

Which resulted in this: https://video.non.io/video-2732101706.mp4

Honestly not bad at all. Getting to the point of being able to use this as game assets.

hi_hi · a year ago
I tried an image of an F-117 stealth aircraft from Wikipedia. The output was a complete fail, to the point where I have no idea how they managed to generate the examples on their project page. The basic silhouette was completely inaccurate.

I was hoping you could upload several images from different angles to help it, but that doesn't appear to be a feature.

regularfry · a year ago
The F117 is weird. Unless you already know what it looks like, any single view from any particular angle is quite hard even for a human to extrapolate from. If it wasn't in its dataset then I can forgive it that, particularly because the angular nature of it means that it could easily be tripped up into thinking it's not looking at an aircraft at all.

I'm not saying anything about the quality of the model here - just that the F117 is almost certainly going to be an unfair test.

tarr11 · a year ago
Saw this submitted a few days ago [0]. It's a very impressive demo, and I would like to see it get discussed here.

[0] https://news.ycombinator.com/item?id=42342557

robinduckett · a year ago
I can see the potential, but the images I give it must be very far outside of its training data, because all it generates are weird flat planes.
tosmatos · a year ago
I managed to get it to work with images that looked down on the character/thing, like in an isometric game. Any image facing the front gave flat results.
Etherlord87 · a year ago
Yea another miracle tool... Until you test it.
andybak · a year ago
I've been testing it and it's the best one I've tried so far.

It does have failure cases, but the success rate is fairly high, and when it works, the resulting meshes are reasonably usable (maybe not to game-dev production standards, but that still leaves plenty of other use cases).

d0100 · a year ago
I just asked for a low-poly plant on Adobe Firefly, then uploaded it to Trellis.

The result was pretty good for the mesh, and at least 100x faster than doing it from scratch.

spyder · a year ago
It really depends on the image, but WOW, I was really surprised that it reproduced animal fur with a proper combination of polygon mesh and transparent texture. This kind of capability isn't even demonstrated in the examples on the page.

https://imgur.com/a/qJp4HNX