It infers types. If I do
const data = [
  { name: 'bob', age: 35, state: 'CA' },
  { name: 'jill', age: 37, state: 'MA' },
  { name: 'sam', age: 23, state: 'NY' },
];
TypeScript knows `data` is an array of { name: string, age: number, state: string }. I don't have to tell it. Further, if I use any field, for example
const avg = data.reduce((acc, { age }) => acc + age, 0) / data.length;
It knows that `age` is a number. If I go change `data` and add an age that is not a number, it will complain immediately. I didn't have to first define a type for `data`; it inferred one in a helpful way.

Further, if I add `as const` at the end of `data`, then it will know `state` can only be one of 'CA', 'MA', 'NY' and complain if I try to check it against any other value. Maybe 'state' was a bad choice of example here, but there are plenty of cases where this has been super useful, both for type safety and for code completion.
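For concreteness, here's a small sketch of the `as const` version (the derived type name and the function are just illustrative):

const data = [
  { name: 'bob', age: 35, state: 'CA' },
  { name: 'jill', age: 37, state: 'MA' },
  { name: 'sam', age: 23, state: 'NY' },
] as const;

// Inferred as the union 'CA' | 'MA' | 'NY' rather than plain string.
type State = (typeof data)[number]['state'];

function isWestCoast(state: State): boolean {
  return state === 'CA';
}

isWestCoast('CA');    // fine
// isWestCoast('TX'); // error: '"TX"' is not assignable to parameter of type 'State'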
There are insane levels of depth you can build with this.
Another simple example:
const kColors = {
  red: '#FF0000',
  green: '#00FF00',
  blue: '#0000FF',
} as const;
function keysOf<T extends string>(obj: { [k in T]?: unknown }): readonly T[] {
  return Object.keys(obj) as unknown[] as T[];
}
type Color = keyof typeof kColors;
const kAllColors = keysOf(kColors);
Above, Color is effectively an enum of only 'red', 'green', 'blue'. I can use it in any function and it will complain if I don't pass something provably 'red', 'green', or 'blue'. kAllColors is something I can iterate over to visit every color. And I can safely index `kColors` only by a Color.

In most other languages I've used, I'd have to first declare a separate enum for the type, then associate each of the "keys" with a value, then separately make an array of enum values by hand for iteration, which is easy to let drift out of sync with the enum declaration.
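As a quick sketch of what that buys (continuing from the definitions above; `cssFor` is a made-up name for illustration):

function cssFor(color: Color): string {
  // Indexing kColors by a Color is always safe; the compiler knows every key exists.
  return kColors[color];
}

// Iterate every color without maintaining a separate, hand-written list.
for (const color of kAllColors) {
  console.log(color, cssFor(color));
}

cssFor('red');        // fine
// cssFor('purple');  // error: '"purple"' is not assignable to parameter of type 'Color'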
In other words: after all this time, I feel that Tony Hoare's framing of null references as the billion-dollar mistake may be overselling it at least a little. Making references not nullable by default is an improvement, but the same problem still plays out as long as you ever have a situation where the type system is insufficient to guarantee the presence of a value you "know" must be there. (And even with formal specifications/proofs, I'm not sure we'll ever get to the point where that is always feasible to prove.) The only real question is how much of the problem is solved by not having null references, and I think it's less than people acknowledge.
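To make the pattern concrete, here's a minimal TypeScript sketch (the Map and names are purely illustrative, not anyone's real code): references are non-nullable by default, but the compiler still can't see the invariant, so you assert it and hope you're right.

// strictNullChecks is on, so nothing here is nullable unless the type says so.
const users = new Map<string, string>([['u1', 'alice']]);

function greet(id: string): string {
  // Map.get is typed as string | undefined: the compiler can't prove the key is present.
  const name = users.get(id);
  // We "know" the key must be there (an upstream invariant says so), so we assert it away,
  // and if that belief is ever wrong, this throws at runtime just like the null
  // dereference it was supposed to replace.
  return name!.toUpperCase();
}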
(edit: Of course, it might actually be possible to quantify this, but I wasn't able to find publicly-available data. If any organization were positioned to do it, I reckon it would probably be Uber, since they've developed and deployed both NullAway (Java) and NilAway (Go). But sadly, I don't think they've actually published any information on the number of NPEs/panics before and after. My guess is that it's split: it probably did help some services significantly reduce production issues, but I bet it's even better at catching the kinds of bugs that were already likely to get caught pre-production, just even earlier.)
Haha, no. Microsoft barely talks about F# at all, and has largely left the evolution of the language up to the open source community that supports it. Furthermore, you shouldn't take your cues about what a language is best suited for from marketing types; you should evaluate it based on its strengths as a language and its broader ecosystem. If you seriously doubt that C# is a better comparison to F# than Python, then I suspect you haven't used either C# or F# and you're basing your views on marketing fluff.