I know the potential audience is most likely a lot smaller than Blender but we're really struggling to grow the volunteer community around Open Brush (the open source fork of Google's Tilt Brush).
I was expecting it to grow organically, but it's actually gone quiet recently - despite consistently healthy download and usage numbers.
If anyone has any suggestions that don't involve me spending all my time on community building or PR then I'd love to hear them.
https://openbrush.app/
https://github.com/icosa-gallery/open-brush/
Consider detaching your GitHub repo as a fork of tilt-brush.
Whenever I come across a fork on GH, my first assumption is that the fork is aiming to be merged back into the main repo in the future, and my second assumption is that the maintainers of the fork have less of an interest in the project than the original project’s maintainers. It’s a signal of lower quality IMO. You should keep mention of the original project in your docs, but I personally don’t think it’s necessary to keep the repo as a fork.
That's a really interesting idea. There would be costs - we'd potentially be less discoverable to people forking the original, and I'd have another place to check for interested potential forkers (and therefore contributors).
You can use the GitHub virtual assistant to request this: https://support.github.com/request/fork
I am pretty sure that issues, PRs, stars, and everything should be preserved, but don’t take my word for it.
I'll give it some thought.
You might just be too early right now. VR headsets just aren't very popular. Personally, I find the hardware atrocious and am waiting for someone like Apple to "do it right". As a late adopter of VR, I may not be your target audience, so take what I say with a grain of salt.
I was surprised you didn't have any videos of interacting with OpenBrush on your home page. Normally, I would have clicked away if I stumbled on that page, but since you posted on HN I searched it up on YouTube.
Do you and your volunteers use Open Brush regularly? If so, toss up some casual live streams on Twitch or other platforms. Post on Twitter and Discord, etc. before you go live, then upload the recording to YouTube for people to watch later.
Part of what makes Blender so damn accessible is the huge number of YouTube tutorials. Virtually every feature of Blender has at least one high quality tutorial video, walking people through every step.
Personally I was always assuming that any potential contributor was already fully aware of Tilt Brush. I find it hard to imagine that someone would be far enough removed from our scene that they would need to be informed about what Tilt Brush is about, but engaged enough to want to contribute. Am I wrong in this? Tilt Brush has pretty decent mindshare for anyone interested in VR content creation.
Another problem is that I just don't really want to spend my time making videos. Plenty of other people make videos about Open Brush / Tilt Brush. It just doesn't turn into "increased engagement from potential volunteers".
I really just want to code new features. Everything else is a distraction that I do out of necessity.
I'm aware of Tilt Brush and would use it, but I still haven't prioritized setting aside the money to buy a VR headset. I don't much care for the companies doing it right now, and I'm not interested in just having a phone do it either. I keep hoping for the tech to improve and branch out beyond what it is right now.
Got a Quest2 last July, right around the time Google was killing their Icosa-type-thing.
OpenBrush was imo one of the best apps for it and I wanted to start playing around with the code, but had some friction even getting a build started and eventually lost interest. Installed several SideQuest builds, of course, but after I couldn't get a working distributable with built-in Icosa even all the way into September, I got annoyed and I haven't started the app or looked at the project since.
Just getting the Oculus dev env set up to the half-assed extent I did, it's risky, dubious, and feels bad. I never know which agreement I might accidentally click that'll allow Zuckerberg to Quest2 into my house backwards, or whatever.
I guess this is probably a lot less helpful than I thought it would be when I started typing. Sorry!
Thanks. This comment prompted a discussion on how to improve the initial developer experience. The readme definitely needs a tidy up. We tend to assume that anyone interested in building it will be a moderately experienced Unity developer - but I think it's worth reexamining this assumption.
(It's a brave soul who tackles a codebase of this complexity as their first Unity project! But it was my first big Unity codebase and it was a great way to get myself out of the training pool)
That's unfortunate, because this is such a killer app for VR-based creation.
I expect advanced 3D workflows of the future to look less like Blender and Unreal Engine and more like Tilt Brush.
Full degree of motion of both hands is so liberating. Plus it's fully immersive. When VR UIs improve around knolling and contextual tooling, this will become more obvious to people.
I agree to a large extent, although not all interactions have been cracked for VR and there are things I'd still prefer to do in pancake mode with a keyboard and mouse.
But anything spatial - most definitely. On the whole trying to arrange things fluidly in 3D space via a 2D monitor is like typing with gloves on.
They are closed source and don't participate in the community any more. We're working on our own multiplayer functionality that will hopefully be less buggy and more flexible.
AR/VR is a headache. VR goggles are proprietary and locked-down, laggy, and make you dizzy. Your eyes try to adjust focus when there is no adjustment needed.
This is largely a function of both frame rate and IPD.
The latter is one of the reasons why I feel Meta has done an absolutely huge disservice to VR adoption by making the Oculus series fixed IPD. They claim to be able to software compensate, but my experience is that IPD on a headset being even just a millimeter or two physically off makes the difference between a comfortable VR experience and one that leaves me with a headache after the fact.
The fact that they reduced screen refresh rates on the Oculus Rift S and the original Quest didn't help at all either.
I've had an Index since shortly after they became available and can use it hours at a time without any sort of discomfort provided my GPU is able to serve up enough frame rate for a particular title. Available GPU power is holding back VR currently more than anything, IMO.
It never ceases to amaze me how much Blender keeps improving all the time. I've never used it much, simply because I got burned in the past by the (legacy) esoteric UI, but it has increasingly become a joy to get started with and just noodle around in.
There are other cool open source tools of course. Blender could've easily remained another one of those hard to use niche applications, but they've definitely managed to transcend that.
Honestly, hats off to the Blender dev team for taking their technically impressive but extremely tricky-to-use software and, over just a few years, turning it into the one piece of 3D software everyone suggests to beginners. The fact that it's free isn't even the first thing you mention, because the software is just great.
It really shows that a great user experience is possible in open source projects, and that the benefits are worth it.
I have trouble keeping up with the updates lately. To be honest, compared to quite a few CAD programs common in mechanical engineering, Blender has nothing to be ashamed of. Quite the contrary, in fact.
Blender is amazing. The other day I needed to create a video of a rotating image - a very quick and dirty job - so I looked around for potential software to use.
I didn't want to download anything large like Davinci Resolve. I saw OpenShot but was not sure it can do what I needed easily. It's not even lightweight.
Then someone mentioned that Blender can do video editing. Can you believe that Blender is only around 200 MB? I downloaded it and followed a quick tutorial on YouTube to figure out keyframes and how to render. Ten minutes later I was done.
It's great to see even more features coming to the video sequencer in this update. I'll probably use it again for my next video editing needs.
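(For anyone curious what that kind of ten-minute job looks like as a script rather than clicks, here's a minimal sketch using Blender's Python API. It's illustrative only - the commenter worked through the GUI, and the object name, frame range, and output path below are made up.)

    # Rough sketch: spin an object holding the image and render the animation.
    # Must be run from inside Blender, e.g.:
    #   blender --background scene.blend --python rotate.py
    import math
    import bpy

    scene = bpy.context.scene
    obj = bpy.data.objects["Plane"]   # hypothetical object textured with the image

    scene.frame_start, scene.frame_end = 1, 120

    # Keyframe one full turn around Z across the frame range.
    obj.rotation_euler = (0.0, 0.0, 0.0)
    obj.keyframe_insert(data_path="rotation_euler", frame=1)
    obj.rotation_euler = (0.0, 0.0, math.tau)
    obj.keyframe_insert(data_path="rotation_euler", frame=120)

    # Write the frames out as an MP4.
    scene.render.filepath = "//rotating_image"
    scene.render.image_settings.file_format = "FFMPEG"
    scene.render.ffmpeg.format = "MPEG4"
    bpy.ops.render.render(animation=True)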
I love Blender, but the NLE functionality still isn't there yet. Basic things like keeping audio in sync with video break all the time. I had to switch to Resolve for my little projects. I will continue to evaluate Blender, but for now it still isn't ready for primetime.
Agreed about the NLE. I tried using Blender (iirc 2.9) to make a family vacation compilation and it was a lot of work and not very intuitive. I barely do any video editing though, so I may just not have had a good workflow. I wasn't able to find very good workflows either - I'm assuming because Blender was still awkward for this use case and others usually reach for another tool.
I'm very much looking forward to further improvements, and judging from the breakneck pace of features and how much love the project gets, I'm very optimistic about the future!
I had some issues with audio syncing to variable-FPS video, which is mostly used to save some space and battery when taking videos on consumer cameras but is probably something professional movie makers don't use at all. Still, it would be nice for home video makers if that just worked out of the box.
To be honest, the video sequencer needs heavy refactoring. Yes, it can be used, but the UI isn't the best. Text support is very basic, so it's not very useful for titling (text is a general issue with Blender). Also, unfortunately, you can't use shaders directly in the video sequencer to create your own filters, for instance. Same limitations with the compositor (which isn't GPU accelerated, so it is really slow).
There really needs to be a "the architecture of open source applications" type writeup about Blender.
However it is that it is architectured internally seems to have helped it grow over time and not collapse under the weight of 30 years of hacks and poor decisions.
One of their very early decisions was extremely good in my opinion. Each blender file is saved with a complete schema at the start which basically describes a bunch of C structs, and has a marker for byte order. This means that .blend files can be backwards and forwards compatible, and in the most common case of the schema matching your memory layout, can have their data structures copied directly into memory and then only pointer patching is required. It's quite remarkable.
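(As a rough illustration of why that works - this is not Blender's own loader code - the fixed 12-byte header at the start of every .blend file already tells a reader the pointer size and byte order that the rest of the file, including the embedded struct schema, was written with:)

    # Illustration only, not Blender's actual implementation.
    def read_blend_header(path):
        with open(path, "rb") as f:
            if f.read(7) != b"BLENDER":
                raise ValueError("not an uncompressed .blend file")
            pointer_size = 8 if f.read(1) == b"-" else 4            # '-' = 64-bit, '_' = 32-bit
            byte_order = "little" if f.read(1) == b"v" else "big"   # 'v' / 'V'
            version = f.read(3).decode()                            # e.g. "305" for Blender 3.5
        return pointer_size, byte_order, version

    # e.g. read_blend_header("example.blend") -> (8, "little", "305")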
I assume the lack of corporate pressure to just get features out the door and clean them up later (read: never) is a huge factor. I agree - Blender releases new capabilities at such a rapid clip that there's clearly a solid base they're building on, one we could all learn a few lessons from.
> However it is that it is architectured internally…
Countless hacks hanging off one giant MVC.
And having a few core devs with veto powers has helped a bunch with it never getting bogged down with bad decisions. I tried to get some iffy stuff tacked on but they (mostly Campbell) would be “umm, what’s this good for?” And don’t even think about adding a null check to prevent a segfault without getting permission first, like, signed in blood…in triplicate.
The Blender development wiki is already really good, plus I think there's already a book on working in the Blender codebase - 'Core Blender Development', I think.
I spent a lot of time during COVID learning Houdini as a hobbyist, because I really liked the concept of procedural modeling and node based development. However, with the continued iterations on Geometry nodes, it feels like a foregone conclusion that Blender will replace Houdini for hobbyist procedural artists. Cycles is a really nice GPU renderer and I don’t have to pay a subscription like I do with Redshift.
Houdini still has some strong advantages built up over decades, such as dynamics and KineFX, and it's the industry leader for FX. I wouldn't be surprised if these tools appear in Blender in a future version, though.
IMO geometry nodes are a bit lacking atm: no loops, no procedural UV unwrapping, no compact maths expressions, a bare-bones standard node selection in general, etc. It feels like they've been concentrating on particle-system-type use cases at the expense of things like procedural architecture. However, it's still a relatively new feature & I'm excited to see where it will be in a couple of years' time.
The great thing about blender 2.8+ is that it tends to be good enough in a lot of different areas even if it isn't best in class in any one area. If it can get to that state with geometry nodes that would be massive.
I was reading this article yesterday: 'The best 3D modelling software in 2022'
https://www.creativebloq.com/features/best-3d-modelling-soft...
As a blender studio subscriber & hobbyist, I'm so grateful such incredible software has no barrier to entry, especially for creatives in developing countries.
ZBrush and Houdini are still in their own category, though - untouchable. Blender is getting close to the sheer pleasure that is sculpting in ZBrush, but there's still quite a way to go. And Houdini's particle work is unequaled.
For Houdini, you listed the price for the version used by major VFX studios.
Houdini Indie (with no feature limitations) is only $269 per year, and you can also learn Houdini for free with Houdini Apprentice (again: no feature limitations).
Independent VFX artists earn well under the $100K/year revenue ceiling for Houdini Indie, so $269/year is their actual "cost."
AMD HIP support on Linux now works without proprietary drivers on RDNA2 cards! My 6700 XT is great - all I had to do was install the hip-runtime-amd package on Debian from AMD's ROCm repo¹. I'm glad they're putting the work into supporting AMD hardware, and I no longer have to use OpenCL and be stuck on older Blender versions.
¹ https://repo.radeon.com/rocm/apt/debian ubuntu main
Dreaming of the day Blender has more support for 2D animation. Software like Toon Boom Harmony is way too expensive for hobbyist work, and there aren't many alternatives for that paper-cutout type of animation.
Sadly, .NET/Mono apps are not so good for Linux.
C# isn't the problem. Unity takes care of that.
https://pageviews.wmcloud.org/?project=en.wikipedia.org&star...
This is an art program for (mainly) individual users and predates the metaverse hype by multiple years.
I dislike the metaverse hype as much as you but I love content creation tools. There's no need to use any mention of VR to rag on that.
I also mentioned our usage numbers are very good so your point doesn't even really make sense.
In the US the commercial competitors would have sued the government for anti-competitive practices if they had sponsored Blender.
Can you give an example of where this happened?
It has always had a great community, and made amazing development progress.
Version 2.5 was a big UI shift at the time.
These days, with 2.8 and then 3, they have managed to make some amazing progress in a much faster turnaround time.
I commend Ton's direction and the amazing work by the devs :)
https://www.olivevideoeditor.org/
It's small, opens fast, and does a good job.
But hey, it's free.
Link to AOSA for others' convenience: http://aosabook.org/en/index.html
You can see this reflected in this great video of the worldwide Blender community from last year's Blender Conference: https://youtu.be/uEjmbsiflMU?list=PLa1F2ddGya_8Wzpajwu1EtiS8...