SeasonalEnnui · 5 months ago
The thing I enjoy most about C# is the depth/levels of progressive enhancement you can do.

Let's say in the first instance, you write a proof of concept algorithm using basic concepts like List<T>, foreach, stream writing. Accessible to a beginner, safe code, but it'll churn memory (which is GC'd) and run using scalar CPU instructions.

Depending on your requirements you can then progressively enhance the memory churn, or the processing speed:

for(;;), async, LINQ, T[], ArrayPool<T>, Span<T>, NativeMemory.Alloc, Parallel.For, Vector<T>, Vector256<T>, System.Runtime.Intrinsics.

Eventually getting to a point where it's nearly the same as the best C code you could write, with no memory churn (or stop-the-world GC), and SIMD over all CPU cores for blisteringly fast performance, whilst keeping all or most of the safety.
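
As a rough sketch of the two ends of that spectrum (a hypothetical SumOfSquares, written first the beginner way and then with Span<T> and Vector<T>; the names and the exact APIs chosen are illustrative, not a definitive recipe):

    using System;
    using System.Collections.Generic;
    using System.Numerics;

    static class SumOfSquares
    {
        // First pass: clear, safe, scalar, and easy to follow.
        public static long Simple(List<int> values)
        {
            long total = 0;
            foreach (var v in values)
                total += (long)v * v;
            return total;
        }

        // Tuned pass: no allocation, SIMD over the bulk of the data, scalar tail at the end.
        // Assumes the inputs are small enough that the int squares don't overflow.
        public static long Vectorized(ReadOnlySpan<int> values)
        {
            long total = 0;
            int i = 0;
            int width = Vector<int>.Count;
            for (; i <= values.Length - width; i += width)
            {
                var v = new Vector<int>(values.Slice(i, width));
                var squared = v * v;
                for (int lane = 0; lane < width; lane++)
                    total += squared[lane];
            }
            for (; i < values.Length; i++)
                total += (long)values[i] * values[i];
            return total;
        }
    }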

I think these new language features have the same virtue - I can opt into them later, and intellisense/analysers will optionally make me aware that they exist.

opticfluorine · 5 months ago
I have occasionally, just for fun, written benchmarks for some algorithm in C++ and an equivalent C# implementation, then tried to bring the managed performance in line with native using the methods you mention and others. I'm always surprised by how often I can match the performance of the unmanaged code (even when I'm trying to optimize my C++ to the limit) while still ending up with readable and maintainable C#.
iamflimflam1 · 5 months ago
JIT compilers can outperform statically compiled code by analysing at run time exactly what branches are taken and then optimising based on that.
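
A small sketch of the kind of case meant here (hypothetical IShape hierarchy; as I understand tiered compilation and dynamic PGO, the runtime can observe which branch of the virtual dispatch is actually taken and speculate on it, which a statically compiled binary without profile data can't):

    using System;

    interface IShape { double Area(); }
    sealed class Circle : IShape { public double Radius; public double Area() => Math.PI * Radius * Radius; }
    sealed class Square : IShape { public double Side; public double Area() => Side * Side; }

    static class Demo
    {
        // Statically this is an opaque interface call. At run time the JIT can observe
        // that, say, 99% of the elements are Circle and recompile the method with a
        // guarded, inlined Circle fast path, falling back to the interface call otherwise.
        public static double TotalArea(IShape[] shapes)
        {
            double total = 0;
            foreach (var s in shapes)
                total += s.Area();
            return total;
        }
    }
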
panzi · 5 months ago
Does this include the GC at the end of it all? Because if that happens after the end timestamp it's not an exact comparison. I read something once about speeding up a C/C++ compiler by simply turning free into a no-op. Such a compiler basically allocates more and more data and only frees it all at the end of execution, so then doing all the free calls is just wasted CPU cycles.
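
One way to make the comparison fairer is to force a full collection inside the timed region, so the managed run pays for its garbage before the clock stops (a minimal sketch; RunWorkload is a placeholder for whatever is being benchmarked):

    using System;
    using System.Diagnostics;

    var sw = Stopwatch.StartNew();

    RunWorkload();

    // Charge the cleanup of the garbage to the managed run before stopping the clock.
    GC.Collect();
    GC.WaitForPendingFinalizers();
    GC.Collect();

    sw.Stop();
    Console.WriteLine($"Elapsed including collection: {sw.Elapsed}");

    static void RunWorkload()
    {
        // Placeholder: allocate something so there is garbage to collect.
        for (int i = 0; i < 1_000; i++)
            _ = new byte[64 * 1024];
    }
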
throw-the-towel · 5 months ago
Could you please share some benchmark code? It would be incredibly useful as a learning aid!
pjmlp · 5 months ago
That is the promise we could already have had in the 1990's with languages like Eiffel, Oberon and Modula-3, and it has taken about 30 years for it to finally become mainstream.

C# is not the only one offering this kind of capability; still, big kudos to the team, and the .NET performance improvements blog posts are a pleasure to read.

Deleted Comment

ivankahl · 5 months ago
Thanks for the upvotes! While testing and writing about the feature, I suspected it would receive mixed reactions.

The `?.` operator behaves on the LHS much as it does on the RHS, making the language more consistent, which is always a good thing. In terms of readability, I would say that once you understand how the operator works (which is intuitive because the language already supports it on the RHS), it becomes more readable than wrapping conditionals in `if` statements.
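
As a tiny illustration (hypothetical Order type), the read and the write now mirror each other:

    #nullable enable

    class Order { public int OrderNumber { get; set; } }

    static class Demo
    {
        static void Update(Order? order)
        {
            // RHS (read), supported since C# 6: yields null when order is null.
            int? number = order?.OrderNumber;

            // LHS (write), the new part: roughly equivalent to
            //   if (order is not null) order.OrderNumber = 42;
            order?.OrderNumber = 42;
        }
    }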

There are downsides, such as the overuse I mentioned. But this is true for many other language features: it requires experience to know when to use a feature appropriately, rather than applying it everywhere.

However, the great thing about this particular enhancement is that it's mostly cosmetic. Nothing prevents teams from not adopting it; the old syntax still works and can be enforced. C# and .NET are incredibly versatile, which means code can look dramatically different depending on its context and domain. For some projects, this feature might not be needed at all. But many codebases do end up with various conditional assignments, and in those cases, this can be useful.

Archelaos · 5 months ago
I have long desired such a language feature. It is a great addition to the language, because a single such expression helps avoid the bugs caused by duplicated references in the conditional test and the assignment statement of the traditional version, especially when refactoring longer code blocks.

For example, if we have something like this:

    if (config?.Settings is not null) 
    {
        ... Multiple lines that modify other settings.
        config.Settings.RetryPolicy = new ExponentialBackoffRetryPolicy();
    }
and we introduce another category SpecialSettings, we need to split one code block into two and manually place each line in the correct code block:

    if (config?.Settings is not null) 
    {
        ... Multiple lines that modify other (normal) settings.
    }
    if (config?.SpecialSettings is not null) 
    {
        ... Multiple lines that modify other special settings.
        config.SpecialSettings.RetryPolicy = new ExponentialBackoffRetryPolicy();
    }
With the new language feature the modification is easy and concise:

    config?.Settings?.RetryPolicy = new ExponentialBackoffRetryPolicy();
becomes:

    config?.SpecialSettings?.RetryPolicy = new ExponentialBackoffRetryPolicy();
and can be made for any other special setting in place, without the need to group them.

Furthermore, I find the "Don't Overuse It" section of the article somewhat misleading. All the issues mentioned with regard to

    customer?.Orders?.FirstOrDefault()?.OrderNumber = GenerateNewOrderNumber();
would apply to the traditional version as well:

    if (customer is not null)
    {
        if (customer.Orders is not null)
        {
            if (customer.Orders.FirstOrDefault() is not null)
            {
                customer.Orders.FirstOrDefault().OrderNumber = GenerateNewOrderNumber();     
            }        
        }
    }
or:

    if (customer is not null)
    {
        var orders = customer.Orders;
        if (orders is not null)
        {
            var firstOrder = orders.FirstOrDefault();
            if (firstOrder is not null)
            {
                firstOrder.OrderNumber = GenerateNewOrderNumber();     
            }        
        }
    }
If it really were a bug for customer to be null here, etc., then it would of course make sense to guard the code in as much detail as the article describes. However, this is not an issue specific to the new language feature. Or to put it more bluntly:

    customer?.Orders?.FirstOrDefault()?.OrderNumber = GenerateNewOrderNumber();
is no replacement for

    customer.Orders.First().OrderNumber = GenerateNewOrderNumber();
where we want an exception on null.

BTW, with the new version, we can also make the code even clearer by placing each element on its own line:

    customer?
    .Orders?
    .FirstOrDefault()?
    .OrderNumber = 
        GenerateNewOrderNumber();

bandyaboot · 5 months ago
I’m having a hard time imagining where this is useful. If I’m trying to assign to a property, but encounter an intermediate null value in the access chain, just skipping the assignment is almost never going to be what I want to do. I’m going to want to initialize that null value.
Metasyntactic · 5 months ago
Hi there, one of the lang designers here :)

Think of it this way. We already supported these semantics in existing syntax through things like invocations (which are freely allowed to mutate/write). So `x?.SetSomething(e1)`. We want properties to feel and behave similarly to methods (after all, they're just methods under the covers), but these sorts of deviations end up making that not the case.

In this situation, we felt like we were actually reducing concept count by removing yet another way that properties don't compose as well with other language features as something like invocation calls do.
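
A rough sketch of that symmetry (hypothetical Widget type; the property assignment now lowers to essentially the same null-guarded shape as the method call):

    #nullable enable

    class Widget
    {
        public string Label { get; set; } = "";
        public void SetLabel(string value) => Label = value;
    }

    static class Demo
    {
        static void Configure(Widget? w)
        {
            w?.SetLabel("ok");   // always been allowed: the call is skipped when w is null

            w?.Label = "ok";     // now allowed too: the setter (also just a method) is skipped
                                 // when w is null, i.e. roughly: if (w is not null) w.Label = "ok";
        }
    }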

Note: when we make these features we do examine the ecosystem, and we can see how useful the feature would be. We also communicate continuously with our community to see just how desirable such a feature is. That goes beyond just those who participate on the open source design site; it also includes tons of private partners, as well as tens of thousands of developers participating at our conferences and other events.

The lack of this feature had been a continual thorn for many, and we received continuous feedback about it in the decade since `?.` was introduced. We are very cautious about adding features. But in this case, given the continued feedback, positive reception from huge swaths of the ecosystem, minimal costs, lowered complexity, and increased consistency in the language, this felt like a very reasonable change to make.

Thanks!

moogly · 5 months ago
I'm also not sure I have a lot of code where this would be useful, but adding it to the language I don't feel makes it worse in any way; in fact, it makes it more consistent since you can do conditional null reads and conditional null method invocations (w/ `?.Invoke()`), so why not writes too.
notTooFarGone · 5 months ago
Adding something is always gonna make things worse by default and has to be proven to be useful. Otherwise you have bloat and "yet another way of doing the one thing".

I'm a fan of this notation because it's consistent but language design should not just add features because it doesn't hurt.

layer8 · 5 months ago
“Why not?” is never a good-enough reason to add a new language feature.

If it’s rarely used, people may misinterpret whether the RHS is evaluated or not when the LHS doesn’t exist (I don’t actually know which it is).

Optional operations and missing properties often require subtle consideration of how to handle them. You don’t want to make it too easy to say “whatever”.

Quarrelsome · 5 months ago
improving crappy codebases without breaking anything. Bad .NET developers are forever doing null checks because they write weird and unreliable code. So if you have to fix up some pile of rotting code, it can help you slowly iterate towards something more sane over time.

For example in my last gig, the original devs didn't understand typing, so they were forever writing typing code at low levels to check types (with marker interfaces) to basically implement classes outside of the classes. Then of course there was lots of setting of mutable state outside of constructors, so basically null was always in play at any moment at any time.

I would have loved this feature while working for them, but alas; they were still on 4.8.1 and refused to allow me to upgrade the codebase to .net core, so it wouldn't have helped anyway.

rahkiin · 5 months ago
These null checks are effectively what Optionals are in other type systems. The whole standard library and many of the better packages use nullability annotations and thus indicate what can and cannot ever be null. And structs can never be null.

So no, C# developers are not constantly null-checking any more than in Rust.
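
A small sketch of what that looks like in practice (hypothetical Customer type; with `#nullable enable`, `Customer?` tells both the reader and the compiler that null is expected there, and a plain `Customer` that it never is):

    #nullable enable

    class Customer { public string Name = ""; }

    static class Demo
    {
        // Callers may pass null here, and the compiler knows it.
        static int NameLength(Customer? customer)
        {
            // Writing customer.Name.Length directly would produce a "possible null
            // dereference" warning, so the check (or the ?.) ends up only where null
            // can actually occur.
            return customer?.Name.Length ?? 0;
        }
    }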

zdragnar · 5 months ago
Unfortunately, I suspect this will just make it easier to keep writing sloppy code.

Deleted Comment

Deleted Comment

Deleted Comment

mkoubaa · 5 months ago
Monad-maxxing has ruined many a language
chowells · 5 months ago
This is a functor, not a monad. Also, it's implemented really poorly. If only more languages actually implemented monads well. You wouldn't need special case junk like this.
rkagerer · 5 months ago
More concise? Yes.

More readable? I'm less convinced on that one.

Some of those edge cases and their effects can get pretty nuanced. I fear this will get overused exactly as the article warns, and I'm going to see bloody question marks all over codebases. I hope in time the mental overhead to interpret exactly what they're doing will become muscle memory...

Metasyntactic · 5 months ago
Hi there! Lang designer here.

> More concise? Yes.

Note: being more concise is not really the goal of the `?` features. The goal is actually to be more correct and clear. A core problem these features help avoid is the unfortunate situation people find themselves in with null checks, where they either do:

    if (some_expr != null)
        some_expr...
Or, the more correct, but much more unwieldy:

    var temp = some_expr;
    if (temp != null)
        temp...
`?` allows the collapsing of all the concepts together. The computation is only performed once, and the check and subsequent operation on it only happens when it is non-null.

Note that this is not a speculative concern. Codebases have shipped with real bugs because people opted for the former form versus the latter.
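
A contrived sketch of that kind of bug (hypothetical Connection property that another thread can clear between the check and the use; the `?.` form reads the property once into a temporary, so it avoids the race):

    #nullable enable

    class Connection { public void Send(string message) { } }

    class Client
    {
        // Mutable state that e.g. a Disconnect() on another thread can null out at any time.
        public Connection? Connection { get; set; }
    }

    static class Demo
    {
        static void Notify(Client client)
        {
            // Reads the property twice; it can become null between the check and the use.
            if (client.Connection != null)
                client.Connection.Send("hello");   // can still throw NullReferenceException

            // Reads the property once, checks the temporary, then uses it.
            client.Connection?.Send("hello");
        }
    }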

Our goal is to make it so that the most correct form feels the nicest to write and maintain. Some languages opt for making the user continuously write out verbose patterns over and over again to do this, but we actually view that as a negative (you are welcome to disagree, of course). We think forcing users into unwieldy patterns everywhere ends up increasing the noise of the program and decreasing the signal. Having common patterns fade away, and become more correct (and often more performant), is what we see as a primary purpose of the language in the first place.

Thanks!

saberience · 5 months ago
As a really long-term C# engineer, I feel quite strongly that C# has become a harder and harder language over time, with a massive overabundance of different ways of doing the same thing and tons of new syntactic sugar, so five different devs can write five different ways of doing the same thing, even if it's a really simple thing!

At this point, even though I've been doing .net since version 2, I get confused with what null checks I should be doing and what is the new "right" and best syntax. It's kind of becoming a huge fucking mess, in my opinion anyway.

If you want a kind of proof of this, see this documentation which requires 1000s of words to try and explain how to do null/nullable: https://learn.microsoft.com/en-us/dotnet/csharp/nullable-ref...

Do you think most C# devs really understand and follow this entire (complex and verbose) article?

rkagerer · 5 months ago
Thanks for stopping by to comment!

I'd love to see some good examples of those bugs you referred to, in order to get some more context.

Is the intent of the second form to evaluate only once, and cache that answer to avoid re-evaluating some_expr?

When some_expr is a simple variable, I didn't think there was any difference between the two forms, and always thought the first form was canonical. It's what I've seen in codebases forever, going all the way back to C, and it's always been very clear.

When some_expr is more complex, i.e. difficult to compute or mutable in my timeframe of interest, I'm naturally inclined to the second form. I've personally found that case less common (eg. how exactly are you using nulls such that you have to bury them so deep down, and is it possible you're over-using nullable types?).

I appreciate what you're saying about nudging developers to the most correct pattern and letting the noise fade away. I always felt C# struck a good balance with that, although as the language evolved it feels like there's been a growing risk of "too many different right ways" to do things.

Btw while you're here, I understand why prefix increment/decrement could get complicated and why it isn't supported, but being forced to do car.Wheel?.Skids += 1 instead of car.Wheel?.Skids++ also feels odd.

larusso · 5 months ago
When the first wave of null check operators came out, our code bases filled up with ? operators. I had luckily used the operator in Swift and Rust, so I somewhat knew what it can and cannot do. Worse, unlike Rust, the ? operator only works on null. So people started to use null as an optional value. And I think that is at the core of the problem with the feature; C# is not advertising or using it this way themselves. I think the nullable checks etc. are a great way to keep NPEs under control, but they can promote lazy programming as well. In code reviews, more often than not the question comes up when somebody uses ? either as an operator or as a nullable type like ‘string?’: are you sure the value should be nullable? And why are you hiding a bug with a conditional access when the value should never be null in the first place?
DimmieMan · 5 months ago
And more better? I'm not sure either.

In all these examples I feel something must be very wrong with the data model if you're conditionally assigning 3 levels down.

At least with the previous syntax, the annoyance of writing it might prompt you to fix it, and it's clear when you're reading it that something ain't right. Now there's a cute syntax to cover it up and pretend everything is okay.

If you start seeing question marks all over the codebase most of us are going to stop transpiling them in our head and start subconsciously filtering them out and miss a lot of stupid mistakes too.

estimator7292 · 5 months ago
This is something I see in newbie or extremely lazy code. You have some nested object without a sane constructor and you have to conditionally construct a list three levels down.

This is a fantastic way to make such nasty behavior easier.

And agreed on the question mark fatigue. This happened to a project in my last job. Because nullable reference types were disabled, everything had question marks; you can't just wish away null values. So we all became blind and several nullref exceptions persisted for far too long.

I'm not convinced this is any better.

monocularvision · 5 months ago
Swift has had this from the beginning, and it doesn’t seem to have been a problem.
arwhatever · 5 months ago
What?.could?.possibly?.go?.wrong?.
esafak · 5 months ago

    if (This) {
        if (is) {
            if (much) {
                if (better) {
                    println("I get paid by the brace")
                }
            }
        }
    }

kazinator · 5 months ago
Nothing to worry about:

  What?.could?.possibly?.go?.wrong?
Not so convinced:

  What?.could?.possibly?.go?.wrong = important_value()
Maybe the design is wrong if the code is asked to store values into an incomplete skeleton, and it's just okay to discard them in that case.

h4x0rr · 5 months ago
Oh come on just learn it properly it's not a big deal to read it
peterashford · 5 months ago
I'm a Java fan so I'm contractually required to dis c#, but actually I kinda like this. It reduces boilerplate. Yes, it could be abused but this is what code review is for.
moomin · 5 months ago
You’re not wrong. Every time a language feature gets added, there’s someone who wants to stop the clock and hold the language definition in place because “people might misuse it” or “people might not be familiar with it”. It’s not language specific, it’s everywhere.
ffsm8 · 5 months ago
Still, enabling ?. access on the left side of the equals (assigning) feels like a serious anti-pattern to me

I struggle to even see how anyone would prefer that over an explicit if before assigning.

Having that on the right side (attribute reference) is great, but that was already available as far as I understood the post...

pjmlp · 5 months ago
Why the requirement, because of J++ and how Ext-VOS alongside Cool became .NET?

Most companies don't care about this kind of stuff.

I work across Java, C#, JS/TS, C++, SQL, and whatever else might be needed, even stuff like Go and C, which I routinely criticise, because there is my opinion, and then there is the job market, and I'd rather pay my bills.

cemdervis · 5 months ago
Reminds me of something I read somewhere: "I don't love Java, but I love the house it bought me."
peterashford · 5 months ago
Yeah, I work with lots of languages too. Including c#, as it happens.

It was a joke.

maltalex · 5 months ago
Cute, but is this actually needed? It's one more thing to remember, one more thing to know the subtleties of, and for what? To save writing a very readable and unambiguous line of code?

It feels like the C# designers have a hard time saying "no" to ideas coming their way. It's one of my biggest annoyances with this otherwise nice language. At this point, C# has over 120 keywords (incl. contextual ones) [0]. This is almost twice as many as Java (68) [1], and five times as many as Go (25) [2]. And for what? We're trading simplicity for brevity.

[0]: https://learn.microsoft.com/en-us/dotnet/csharp/language-ref... keywords/

[1]: https://en.wikipedia.org/wiki/List_of_Java_keywords

[2]: https://go.dev/ref/spec#Keywords

Metasyntactic · 5 months ago
> Cute, but is this actually needed? It's one more thing to remember, one more thing to know the subtleties of, and for what?

Hi there! C# language designer here :-)

In this case, it's more that this feature made the language more uniform. We've had `?.` for more than 10 years now, and it worked properly for most expressions except assignment.

During that time we got a large amount of feedback from users asking for this, and we commonly ran into it ourselves. At both the language and implementation level this was very easy to add, so it was a low-cost QoL feature that just made things nicer and more consistent.

> It feels like the C# designers have a hard time saying "no" to ideas coming their way.

We say no to more than 99% of requests.

> We're trading brevity for complexity

There's no new keyword here. And this makes usage and processing of `?.` more uniform and consistent. Imo, that is a good thing. You have less complexity that way.

klysm · 5 months ago
Thank you for all the hard work on C#! I’ve been loving the past 5 years of developments and don’t agree with the parent comment here.

p.s. I will take the opportunity to say that I dream of the day when C# gets bounded sum types with compiler enforced exhaustive pattern matching. It feels like we are soooo close with records and switch expression, but just missing one or two pieces to make it work.

maltalex · 5 months ago
Thanks for the reply and for your work on C#.

I should have been clearer in my message. This specific feature is nice, and the semantics are straightforward. My message is more about the myriad of other language features with questionable benefits. There's simply more and more "stuff" in the language and a growing number of ways to write the same logic, but often with subtle semantic differences between each variant. There are just too many different, often overlapping, concepts. The number of keywords is a symptom of that.

ygra · 5 months ago
I stumbled over this a few times, mostly when cleaning up older code. This basically just means that using the ?. member access no longer dictates what is possible on the right side.

Property reads were fine before (returning null if a part of the chain was null), method invocations were fine (either returning null or just being a no-op if a part of the chain was null). But assignments were not, despite syntactically every ?. being basically an if statement, preventing the right side from executing if the left side is null (yes, that includes side-effects from nested expressions, like arguments to invocations).

So this is not exactly a new feature, it just removes a gotcha from an old one and ensures we can use ?. in more places where it previously may have been useful, but could not be used legally due to syntax reasons.
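
Concretely, the three shapes now line up (hypothetical Person type; note that the right-hand side, including any side effects, does not run when the receiver is null):

    #nullable enable

    class Person
    {
        public string Name { get; set; } = "";
        public void Refresh() { }
    }

    static class Demo
    {
        static string BuildDisplayName() => "Ada Lovelace";

        static void Touch(Person? person)
        {
            var name = person?.Name;             // read: allowed since C# 6, yields null
            person?.Refresh();                   // invocation: allowed since C# 6, a no-op on null
            person?.Name = BuildDisplayName();   // assignment: newly legal; when person is null,
                                                 // BuildDisplayName() is never evaluated
        }
    }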

zigzag312 · 5 months ago
I don't get this argument as it really doesn't match my practical experience. Using new C# features, the code I write is both easier to read and easier to write. On top of that it's less error prone.

C# is also much more flexible than the languages you compared it to. In a bunch of scenarios where you would need to add a second language to the stack, with C# you can still use just one language, which reduces complexity significantly.

buybackoff · 5 months ago
> is this actually needed

Yes, actually. I did write it multiple times naturally only to realize it was not supported yet. The pattern is very intuitive.

alkonaut · 5 months ago
Yes, this doesn't actually add anything to the "size" of the language, if anything it actually shrinks it. It's existing syntax (the ? and ?? operators) and existing semantics. The only thing was that it worked in half the cases, only reads but not writes. Now this completes the functionality so it works everywhere.

You can argue that C# gets a lot of new features that are hard to keep up with, but I wouldn't agree this is one of them. This actually _reduces_ the "mental size" of C#.

SideburnsOfDoom · 5 months ago
> This actually _reduces_ the "mental size" of C#

IDK, if you read

  Settings?.RetryPolicy = new ExponentialBackoffRetryPolicy();

as "there is a now a ExponentialBackoffRetryPolicy" then you could be caught out when there isn't. That one ? char can be ignored .. unless it can't. It's another case where "just because it compiles and runs doesn't mean that it does the thing".

This to me is another thing to keep track of. i.e. an increase in the size of the mental map needed to understand the code.

bob1029 · 5 months ago
Nothing is stopping you from constraining the language version you want to be used in your projects:

https://learn.microsoft.com/en-us/dotnet/csharp/language-ref...

You can force it all the way down to ISO-1/2.
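
For example, a minimal project-file sketch pinning the compiler (the `LangVersion` property is the real MSBuild knob; the target framework and the `10.0` value are just illustrative, and `ISO-1`/`ISO-2` are accepted for the oldest standards):

    <Project Sdk="Microsoft.NET.Sdk">
      <PropertyGroup>
        <TargetFramework>net8.0</TargetFramework>
        <!-- Pin the language version so newer syntax (e.g. null-conditional assignment) won't compile. -->
        <LangVersion>10.0</LangVersion>
      </PropertyGroup>
    </Project>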

If this is still insufficient, then I question what your goals actually are. Other people using newer versions of C# on their projects shouldn't be a concern of yours.

nothrabannosir · 5 months ago
Oddly antagonistic take on a reasonable comment. GP could be working together with other people, for example, in which case every such idiosyncratic configuration introduces a little social and mental friction. This is covered at length in similar conversations in Go threads, where people say things like “defaults matter”.

Obviously you’re not alone to disagree, and there are even some good arguments you could potentially be making. But to say “I question what your motives really are” and tell someone what they should be concerned with is… odd?

It’s a very common position with ample practical examples. While there certainly are valid counter arguments, they are a little more involved than “nothing is stopping you.” There is. Collaborating with others, for example.

pjmlp · 5 months ago
Easily said; this applies to complaints about other languages as well, and is only doable for people who work alone.
pjmlp · 5 months ago
Agreed, it appears that since they changed to yearly releases they have become pressured to add new language features every year.

As a polyglot I have the advantage that I don't have to sell myself as an XYZ Developer, and increasingly I don't think C# (the language itself) is going in the direction that I would like; for that complexity I'd rather keep using C++.

Just wait for extension everything, plus whatever design union types/ADTs end up having, and then what are they going to add on top to justify the team size and yearly releases?

Despite my opinion on Go's design, I think the .NET team should take a lesson from them and focus on improving the AOT story and runtime performance, leaving the language alone other than when needed to support those points.

Also, bring VB, F# and C++/CLI along; this doesn't have to be the C# Language Runtime, where C# gets all the features of what was designed as a polyglot VM.

johnh-hn · 5 months ago
I went to the .NET Developer Conference (NDC) in London at the beginning of the year. Mads Torgersen (Lead C# Designer, for anyone not in the know) gave a talk about some new proposed features. After describing a proposal to introduce new syntax to make defining extension methods easier, he asked if anyone had any questions. I asked a question along the lines of:

"I understand we sometimes need to address deficiencies in a language, but when we do stop? More syntax is leading to daily decision fatigue where it's difficult to justify one approach over another. I don't want C# to become C++."

It was interesting listening to the discussion that took over from that. The audience seemed in favour of what I said, and someone else in the audience proposed a rolling cut-off to deprecate older features after X years. It sounded very much like Mads had that discussion internally, but Microsoft weren't in favour. I understand why, but the increasing complexity of the language isn't going to help any of us long-term.

ochronus · 5 months ago
Life quality upgrade - needed? Depends.
reactordev · 5 months ago
Love to see conciseness for the sake of readability. Honestly I thought this was already a thing until I tried it a year ago…

I’m glad it’s now a thing. It’s an easy win, helps readability and helps to reduce the verbosity of some functions. Love it. Now, make the runtime faster…

vjvjvjvjghv · 5 months ago
It's starting to feel like C# is going down the path of C++. Tons of features that introduce subtleties and everybody has their own set of pet features they know how to use.

But the code gets really hard to understand when you encounter code that uses a subset you aren't familiar with. I remember staring at C++ codebases for days trying to figure out what is going on there. There was nothing wrong with the code. I just wasn't too familiar with the particular features they were using.

koyote · 5 months ago
There are a couple of reasons I disagree with you on this (at the moment; given enough time I am sure C# will also jump the shark):

* The above is just applying an existing (useful) feature to a different context. So there isn't really much learning needed, it now just 'works as expected' for assignments and I'd expect most C# engineers to start using this from the get go.

* As a C# and C++ developer, I am always excited to hear about new things coming in C++ that purportedly fix some old pain points. But in the last decade I'd say the vast majority of those have been implemented in awful ways that actually make the problem worse (e.g. modules, filesystem, ...). C#'s new features always seem pretty sound to me on the other hand.

jayd16 · 5 months ago
The difference is the language syntax choices are good. There's no "what does this const refer to" type confusion.
vjvjvjvjghv · 5 months ago
Agreed about the syntax choices. Much better than C++. The language is just getting a little too big for my taste.
moomin · 5 months ago
At a social level, I 100% agree with you because I’ve started to see those behaviours in the community. But considered technically, C++ is on a whole different level from C#. The community seems to embrace “What does this print?” style puzzles, and figuring out when perfect forwarding or SFINAE kick in is genuinely tricky.

Static abstract methods are probably the feature I see used least (so far!) and they’re not nearly as hard to understand as half of the stuff in a recent C++ standard.

zeroc8 · 5 months ago
That's why I keep using Golang instead of C#, even though C# is better in a lot of ways. But with Go, I normally can understand any source code I'm control-clicking into.

I've never gotten to this point with any other language, no matter how hard I tried.

klysm · 5 months ago
This is syntax sugar. Don’t conflate syntactic niceties with the semantic absurdity that C++ has.
swoorup · 5 months ago
A C# dev can't complain, because complexity creates jobs.