I love Blender, but have been stuck on version 2.7 due to what I can only describe as some sort of icon dyslexia.
For 2.8, all the Blender icons were replaced with monochromatic ones. This is a very popular trend and a lot of programs are replacing their icons in this way, so it's obviously fine for most people, and I realize this is probably a niche accessibility need I have.
But, to use the new icons, I find I have to check each one every time to find the one I want. Instant recognition is no longer possible. For me, this extra cognitive load makes it difficult to use Blender for more than a few minutes at a time.
Blender is a fantastic product and one of the best examples of what open-source can be, but I for one will be appreciating it from afar, and remaining on 2.7 until there is a way to get more usable icons.
> This is a very popular trend and a lot of programs are replacing their icons in this way, so it's obviously fine for most people, and I realize this is probably a niche accessibility need I have.
It's not a niche accessibility need, it's a universal accessibility need that's been commonly understood for decades. Insufficient differentiation is one of the factors that originally drove the increase of colour count (and later, as display hardware allowed it, resolution) in icons 30-ish years ago.
This trend is not driven by universal preference for monochromatic icons but by the cargo cult that UX has become.
This is only true if we pretend that UI design is largely about button-level optimization. Clearly it needs to work on the macro level as well, and it's not farfetched to assume that optimizing every button, icon and text label for their individual local maxima will result in an application that's overall too cluttered for anyone but the most experienced users.
There were some significant reorganizations from 2.7 -> 2.8, 2.8 -> 2.9, and again from 2.9 -> 3.0. It's upsetting to see this much churn over this short of a time for Blender.
Blender has always been very keyboard driven, but I don't use it often enough to have most of the shortcuts memorized. Even those seem to have changed, though. I'm sure there is a setting in the preferences to restore the old shortcuts, but it doesn't really feel like an appropriate thing to do with such a complex piece of software.
I know Blender has a "reputation" for being difficult to use. It used to be earned, over a decade ago, and it was indeed an impediment to getting new users. I felt like--as even just a casual user--Blender had hit a good sweet spot of organized, predictable, and powerful. It's just the very nature of 3D modelling to be complex, and it feels like most complaints about its interface these days are just from completely new people who don't understand this.
That seems to be one of the dangers of Free Software. Having no monetary barrier to entry, you get a LOT more first-time users. Without the sunk cost of having spent several thousand dollars on 3DS Max or similar, they don't have an emotional incentive to stick through the learning curve yet.
I mean, you could always swap out the assets and rebuild. The icons are compiled into the binary, so there's no easy way to swap them out without recompiling Blender.
Wow! I thought it was just me. I find it incredibly difficult to read icons. Even small design changes throw me - the most annoying was the change in speaker icon across versions of Windows. The bigger the icon the less of a problem it is but those fiddly toolbar icons throw me all the time. I have mentioned this to a few folk but honestly thought I was on my own.
I too cannot deal with monochromatic icons. FYI, 2.9 did (re-)introduce color to many of the UI icons, which made it much easier to use for me than 2.8.
> The general guideline will be to keep Blender functionally compatible with 2.8x and later. Existing workflows or usability habits shouldn’t be broken without good reasons – with general agreement and clearly communicated in advance.
Oh boy. How I wish so much that other software developers would follow this principle... It seems nearly every software I use and rely on has to change its appearance and interface every 6-12 months, breaking familiarity for no objective reason, and simply because "it looks better" to look at (and not necessarily to use!) to the subjective eyes of someone.
I wish they didn't. A lot of very popular software is stuck with counter-intuitive interfaces that pose huge entry barrier for anyone approaching them. Only for the people who learned 20 year old idiosyncrasies to feel at home.
The question I always ask myself as an outsider, is this actually weird and outdated, or is it something that, once you get used to, actually makes people able to work more optimally. Sometimes those power tool design decisions are just bad old decisions, sometimes they really do enable the user. Look at Vim, not for everyone, but if you are willing to learn to invest in its crazy, specific style of user interface people can fly in a way other interfaces don't seem able to quite keep up with.
Well, Blender just did that exact thing going from 2.7 to 2.8 recently, although it was at least somewhat justified. But now that they've reworked it, it doesn't make sense to do it again any time soon.
I feel like that move really should have been a major version bump: it was a big change across the board, where many aspects of functionality changed or were removed, UI locations and naming completely changed, and keyboard shortcuts and UX changed throughout.
Minor bump 2.7 -> 2.8 = everything breaks: your workflow no longer functions, you have to relearn the API, and online resources and documentation are no longer relevant for many aspects of the editor.
Major bump 2.8/2.9 -> 3.0 = everything is compatible with 2.8?? It just feels like 2.8/2.9 and what's referenced in the blog post should have been version 3 to me, but maybe they had some technical reason regarding the backend and scripting APIs?
You're right. I'm so annoyed by Firefox changing its interface every once in a while instead of coming up with a good one that they can actually keep stable for years...
Blender is such an amazing tool. I have a side business where I create 3d printed jewelry with my customer's fingerprints on it in gold and silver (https://lulimjewelry.com) and I use Blender on the backend for all of the jewelry creation.
I run Blender headless in a Docker container on Google Cloud Run. When needed, I invoke it with an image and have a Blender script "engrave" that image on the jewelry and output an STL file.
It is incredibly flexible to script in Python, although it's not very "pythonic". The UI is quite stateful (edit mode, object mode, which items are selected, etc.) and you have to keep track of that state in your program. But once you get around those issues you can do quite a lot, and it's all a free program!
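For what it's worth, the STL files a pipeline like this emits are a simple format. As a sketch (no bpy required, all names here are illustrative, and Blender itself normally exports binary STL), here's a minimal ASCII STL writer in plain Python:

```python
def facet(normal, v1, v2, v3):
    """Format one triangular facet in ASCII STL syntax."""
    lines = [f"  facet normal {normal[0]:e} {normal[1]:e} {normal[2]:e}",
             "    outer loop"]
    for v in (v1, v2, v3):
        lines.append(f"      vertex {v[0]:e} {v[1]:e} {v[2]:e}")
    lines += ["    endloop", "  endfacet"]
    return "\n".join(lines)

def write_stl(path, triangles, name="mesh"):
    """triangles: iterable of (normal, v1, v2, v3) tuples."""
    with open(path, "w") as f:
        f.write(f"solid {name}\n")
        for tri in triangles:
            f.write(facet(*tri) + "\n")
        f.write(f"endsolid {name}\n")

# One triangle in the XY plane, normal pointing up.
write_stl("tri.stl", [((0, 0, 1), (0, 0, 0), (1, 0, 0), (0, 1, 0))])
```

Inside Blender you'd instead drive the exporter from a script, but the output is the same kind of triangle soup either way.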
Blender is amazing for the 3D rendering it can do, and for the fact that it is free... But it is intimidating to people who don't learn it inside and out from a production standpoint when they are doing a lot more than just importing simple objects.
But I kind of wish they had a "Blender Light", without all the features and config options, and with a less complex UI... I've been using Panzoid to do certain things, but Panzoid can't do rigging on imported object files...
I usually want to make animated videos in support of the music I release on my label, but right now doing so is either expensive or time-consuming. I also don't want to put film-studio effort or money into each music video release, because that is not a good business model and my time is limited... The costs of being a creator are rising fast; only solid workflows will ensure survival.
You might be interested in the "Blender 101 (Application templates)" item in this roadmap:
> Being able to configure Blender for personal workflows is a key Blender target, part of the 2.8 project.
> To bring configuring even one step further, .blend files can be used to replace the standard startup file – creating ‘custom versions’ of Blender this way. These templates will allow to completely configure UI layouts, define custom keymaps and enable add-ons to run at startup. A smart Python scripter can design complete new applications this way, using vanilla Blender releases. As prototypes the team will release a couple of simple templates: a video-player review tool and the 100% dumbed down ‘Monkey Blender’.
> A template used this way will be called “Blender Application Template” or just “Blender App”. Planning is to have this working as beta for Blender 3.1, after extensive testing and reviews by contributors and module teams.
I think the bigger issue you're going to run into with "make Blender but simple" is that the subset of features that you want in a simplified blender is a different subset from what other people want. You want to do rigging on imported object files, but for somebody else rigging is a feature that would get cut out completely. They're just trying to model a doughnut and a coffee mug and do a still render of it, or at most animate the camera moving around the scene.
The added value of complex software like Blender is that it makes every task possible, reducing the risk of having to start over a project with better tools because easy to learn software with a constrained workflow and limited features was selected at first. 3D modeling and rendering is particularly suitable for this style of tool because there are many, many editing operations that can be applied to 3D models and many uses they can be put to.
Simplified software is suitable for simple, throwaway needs where the risk of choosing wrong and the cost of changing tools are low: yesterday I had to split a PDF file for the first and presumably last time in my life and I just printed it to a file by page ranges, without bothering to select, install and learn to use a PDF editor like Acrobat.
Mastering general, not "light" software is the only practical foundation for relatively professional "solid workflows" (as opposed to learning the basics of 3D modeling or something else with minimum accidental complexity); there might be a place for very advanced and/or very efficient but very specialized tools (e.g. MagicaVoxel, procedural generators of 3D models, Substance Painter) but only as an addition for equally specialized situations, not as an easy route.
> But I kind of wish they had a "Blender Light", without all the features and config options, and with a less complex UI
I get what you're talking about, sort of an "iMovie" to Blender's "Final Cut Pro". I think 2.80 actually did a ton of work in that regard, if you've seen it since then. It could definitely be simpler though, and with the UI programming the way it is I suspect it wouldn't take all that much internal change.
Unfortunately, I think a lot of the reason it's as intimidating as it is is that the main aim, for buy-in at least for now, is studios who favor the power-user, maximal interfaces of Cinema 4D, Maya, and the like.
Since Blender is open source, it would be possible for some other group to fork Blender and remove parts of the UI for users like you.
I really wish the core team wouldn't focus their effort on this. Blender is a professional tool for professional users. Having to balance between "dumbing down" the UI for first time users and providing a user interface for power users makes it hard to please either one of them, so I'm happy Blender currently focuses on the pro users.
Totally agree. Some time ago I also ran Blender headless with a script to render "product shots" of a product that is available in over a hundred different combinations. What is also very nice is that you can turn on API hints when you hover over an element in the GUI, so you can quickly learn how to access or manipulate data.
That's a really cool application of Blender! I'm also using Blender headless in Docker containers on Google Cloud, though not using Cloud Run. I'm doing it for self-service molecular visualizations -- "create your own molecule video". It's at https://bioviz-studio.com. But I run up to 50 dual-T4 instances for speed (Cycles X helps a lot!). And I agree the Python API is a bit weird and hard to use, but once you get the hang of it you can create really complex and interesting scenes just using Python.
When I started this I spent some time googling to see what sort of harm can come from your fingerprints being leaked, and I couldn't find anything concrete. It's not trivial to take a photo of a fingerprint and turn it into something capable of fooling a smartphone fingerprint reader. Someone with the ability to do that probably also has the ability to lift your print from the thousands of things you touch in your daily life.
And the metal is actually not 3D printed. How it works is the design is printed in wax first, this wax is then used to make a plaster mould into which the molten metal is cast.
The rings are printed in wax and then a plaster mould is made using the wax. The gold or silver is cast using the plaster mould.
I don't do any of it myself; I have a casting partner who does it for me. You can also find online services like Shapeways to do it, but they charge a lot more than the real casting houses. Some casting houses I've worked with were the back ends for Shapeways themselves.
Great to see they're working on Metal support for the viewport (and Vulkan, of course). Apple took their sweet time coming onboard as a sponsor, but now that they have I hope it will give the developers all the support they need to work out the issues.
I have a MacBook Air with M1 and Blender is extremely fast (emulated x86). If only Macs had sensible keyboards so the hotkeys would be easier (not a Mac fan, just got it for free). I highly recommend a numpad though; it is the pinnacle of input technology.
But even older hardware can run Blender without needing a good GPU, as render previews have become quite fast. If you create models for real-time rendering, an older PC without a super GPU is quite enough to work with.
Not quite what you're after, but Blender in sidecar works really, really well for pen-based workflows. I can sculpt and texture paint with the Apple Pencil and it all just... works.
If there was a single feature I could see added to Blender, it would be to share more functionality between objects and collections. Blender has an organizational concept called objects, which are containers with position, scale, rotation, and they contain within them the mesh data. These can be warped, arrayed, cast, decimated, etc, through the use of non destructive modifiers.
Now, these objects can be combined into a higher level organizational container called a collection. These, too, have a position, scale, rotation, etc, and behave much the same way as objects. They can even be combined into parent collections.
Now, what I would love would be to have them gain much of the functionality of objects that they don't currently have, most importantly modifiers.
Say I was building a well. I make a single brick, an object. I can array this brick object to create a wall, and I can add a corner to the wall, and add it all to a collection. Now I have this wall "object," and maybe I want to array it 8x around a center point in order to create an octagonal well. I can't, because I can't place modifiers on collections. For all intents and purposes, it is just another object, and I would love to see any push towards allowing it to behave as such.
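The math behind that 8x radial array is simple either way (in Blender it's usually an Array modifier with an empty's rotation as the object offset). A toy sketch of the transforms it would produce, with an illustrative function name:

```python
import math

def radial_positions(center, radius, count):
    """Positions for `count` copies evenly spaced on a circle around `center`.
    Each entry is (x, y, rotation_degrees), rotating the copy to face outward."""
    cx, cy = center
    out = []
    for i in range(count):
        angle = 2 * math.pi * i / count
        out.append((cx + radius * math.cos(angle),
                    cy + radius * math.sin(angle),
                    math.degrees(angle)))
    return out

# Eight wall segments arranged around the well's center.
for x, y, rot in radial_positions((0.0, 0.0), 2.0, 8):
    print(f"segment at ({x:.2f}, {y:.2f}), rotated {rot:.0f} deg")
```

The point of the complaint stands: nothing in this math cares whether the thing being arrayed is a single mesh object or a collection of them.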
Collections and objects have fundamentally different purposes. An object's job is to anchor a bit of geometry information (e.g. a mesh) in the scene. Collections serve as hierarchical grouping and instancing. Instancing is specifically intended to not produce deep copies of objects. This is essential for creating some types of really complex scenes. Any modifier - destructive or otherwise - requires one or more deep copies to act on. Providing modifiers on collections isn't good UX because it blurs the line between what is and isn't an instance.
Yes, except that you CAN instance collections. You can do linked imports of collections and you can do linked duplicates of collections. And I understand the difference conceptually, but what I'm saying is that the distinction always ends up being arbitrary to me from an actual usage point of view. If I've made a wall or a tower out of mesh data directly, or a collection of objects, it makes no difference to me in regards to my intentions. I have a wall that I would like to instance around my scenes, and I have all the same motivations for wanting to use a non-destructive workflow as I would were it an object. If I have to combine my collection into an object, it defeats the purpose.
Say I was building a scene like King's Landing. For each building, I'd like to have a file representing that building, that I can instance around a scene representing, say, a city block. I'll do a linked import of the building, so any changes I do to the building will take effect around the scene, and I'd like to make use of arrays and other modifiers for ad hoc instance changes. Again, my motivations for wanting a non destructive workflow are completely the same, regardless of whether that building is an object or a collection.
Now, as of now, my only option is for it to be an object. However, what if I want to use a collection of other objects in order to build the building? I want a cellar door that I'd like to have instanced. I'd like a ladder outside, and a few barrels on the second story. I want these instanced for all the same reasons. My preference, of course, would be for the building to be a collection of all these things, but if I want to then treat my building as one of these such objects at the next higher level of detail, I can't.
So you're dismissive of my intentions by insisting that the difference is meaningful, whereas I simply see it as a failure of abstraction. Objects and collections are ultimately both worldspace coordinates holding mesh data; the only difference is whether they're nested. And like I said, collections CAN be nested. That relationship is already established. Collections CAN be instanced; that use case is already established. It's really just the modifiers that are missing. And from a programming perspective, the logic of "get the mesh data from the object I've been assigned to in order to apply this transformation" is the only thing that needs to be tweaked. Rather than going one level deep, it will say "if this is mesh data, stop, else iterate."
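That "stop at mesh data, else iterate" rule is just a recursive walk over the hierarchy. A toy sketch with plain Python stand-ins (these `Object`/`Collection` classes are illustrative, not bpy's):

```python
class Object:
    """Leaf node: anchors mesh data (here, just a vertex list)."""
    def __init__(self, name, mesh):
        self.name, self.mesh = name, mesh

class Collection:
    """Interior node: groups Objects and/or other Collections."""
    def __init__(self, name, children):
        self.name, self.children = name, children

def gather_meshes(node):
    """If this is mesh data, stop; else iterate into the children."""
    if isinstance(node, Object):
        return [node.mesh]
    meshes = []
    for child in node.children:
        meshes.extend(gather_meshes(child))
    return meshes

wall = Collection("wall", [Object("brick", [(0, 0, 0)]),
                           Object("corner", [(1, 0, 0)])])
building = Collection("building", [wall, Object("door", [(2, 0, 0)])])
print(len(gather_meshes(building)))  # all meshes, including the nested ones
```

Whether a modifier acting on the gathered result is cheap enough in practice (instancing exists precisely to avoid deep copies) is the counterargument from the reply above, but the traversal itself is this simple.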
Have you tried geometry nodes? It lets you use meshes from any object, allowing you to build up modifiers non-linearly. I would say it covers 90% of the feature you're proposing.
You basically want Cinema 4D mograph. Unfortunately, Blender does not do that at all, unless you use Animation nodes addon, which works, but it's an addon.
There's a major update coming to the geometry nodes feature in 3.0. It does a lot of the same stuff as animation nodes and even has the same lead developer.
I have, they're super useful. They basically illustrate that what I'm talking about is entirely feasible. In the same way that you can non destructively combine object mesh data into higher order objects, this functionality should be available in the object hierarchy graph.
I do understand that this a roadmap for the 3.x series and not for the very first 3.0 release, but oh boy, is that a lot!
Improvements across the board. Is there even a subsystem that doesn't have big plans? Apart from physics, and that seems to be subsumed into Everything Nodes.
That sounds like a great idea, but I'm not sure how "framework-able" a great UI really is.
I was wondering how you might apply a Blender GUI layer to GIMP (I tolerate GIMP, but I love Blender). I imagine the issues run deeper than just the surface level UI? It would be a great experiment though!
Agree. Even yesterday I was praising Blender's node editor, and I'm encouraging the developers to mimic its functionality in a product we are building.
I especially like that they plan to decouple Time from Frames. That could lead to extremely cool ways to deal with time in the future (e.g. automating speed changes via keyframes to slow down or speed up time globally).
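Decoupling time from frames would make that kind of global retiming a pure mapping problem: integrate a keyframed speed curve to find, for each output time, the source time to sample. A rough numerical sketch under that assumption (the function and its shape are illustrative, not Blender's API):

```python
def remap_time(speed_keys, t_out, dt=0.001):
    """Map an output time to a source time by integrating a piecewise-linear
    speed curve. speed_keys: [(time, speed), ...] sorted by time, where a
    speed of 2.0 means 'play twice as fast'."""
    def speed_at(t):
        if t <= speed_keys[0][0]:
            return speed_keys[0][1]
        for (t0, s0), (t1, s1) in zip(speed_keys, speed_keys[1:]):
            if t0 <= t <= t1:
                return s0 + (s1 - s0) * (t - t0) / (t1 - t0)
        return speed_keys[-1][1]

    src, t = 0.0, 0.0
    while t < t_out:          # simple forward Euler integration
        src += speed_at(t) * dt
        t += dt
    return src

# Constant speed 1.0: one second of output maps to ~1.0s of source.
print(round(remap_time([(0.0, 1.0), (1.0, 1.0)], 1.0), 2))
```

With frames tied directly to time, a speed ramp like this has to be baked into re-sampled keyframes; with time decoupled, the remapping could in principle stay live and editable.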
GIMP did this too and it absolutely blows. Thankfully you can revert back to the "legacy" icons.
There's a plotline for an episode of CSI crying out to be written here...
I know rendering can be quite heavy on a CPU, but it sounds like you're running a series of commands to generate a model instead.
And I don't actually do any rendering on this instance, just the STL generation. I use Three.js to display/render the STL once it's done.
And you get to build a database for any state agencies that might be interested in buying (or taking)!
As opposed to building a database for private agencies?