kazinator · 3 years ago
Regarding the identity, XOR is a kind of addition, without carry.

    101
  ^ 011
  -----
  = 110
We added 1 + 1, to get 10. We put down a 0, but didn't carry the 1.

The carry is separately calculated using AND:

    101
  & 011
  -----
    001  <- carry out from LSB.
Of course the carry is carried, so we have to shift it left: 010. We can just add the carry afterward (using regular addition, which will propagate additional carries not calculated by the AND, like the one from the second digit).

Thus the identity:

  (a+b) = (a^b) + 2*(a&b).
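The identity is easy to spot-check in Python, whose integers act like unbounded two's complement under the bitwise operators:

```python
# XOR adds without carrying; AND picks out the carry bits, which the
# factor of 2 shifts into place.
for a in range(-16, 16):
    for b in range(-16, 16):
        assert a + b == (a ^ b) + 2 * (a & b)
```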

vikingerik · 3 years ago
This works only for non-negative integers, right? If there's a sign bit, that won't work right with XOR, if one operand is negative then the sign bit comes out negative.
gavinsyancey · 3 years ago
If they're expressed in two's complement, this works for negative numbers as well.
captainmuon · 3 years ago
If I ever create a programming language, one thing I'd like to try is to promote integer operations to larger types, so there is never overflow. For example, if a and b are int16, then a+b is int17, and (a+b)/2 is int16 again.

In practice you'd store them in the next larger integer, so 32 bits for a 17 bit int. If you really want to cut off bits or overflow, you use a cast or modulo.

It seems really weird that the "correct" way to do this calculation is to resort to bit manipulation (which should be an implementation detail IMO).

chewbacha · 3 years ago
So, this is basically what dynamic languages do, because they _can_ go back and allocate more memory transparently to the developer. However, static languages cannot change the size of values dynamically at runtime without additional memory allocations. In fact, this would likely mean that all numbers must be heap allocated, which is likely a performance penalty when working in high-performance systems. In those cases, an algorithm that produces correct results with constant memory usage is preferred.
webstrand · 3 years ago
This should be possible for static languages? Since the widths of the operands are known ahead of time, a wider integer can be unconditionally reserved on the stack for the result of the operation.

I'm curious which dynamic language reallocates to store larger integers? All of the dynamic languages that I'm familiar with simply store numbers as doubles, with variable width integers being handled by opt-in types.

rep_lodsb · 3 years ago
The result of any expression would still be assigned to a variable or function parameter that has a defined type, so it would be limited to that size. However, intermediate values could automatically use larger registers, the CPU's carry flag, or some other mechanism to expand the range.

It would be desirable that every expression either produces the mathematically correct result or raises a runtime exception.

In many cases it would be easy for the compiler to limit intermediate results to some number of bits (since it knows the maximum allowed range for the final result), but it may be a problem to guarantee this.

kazinator · 3 years ago
> However, static languages cannot change the size of the values dynamically at runtime without additional memory allocations.

That is false; a static language could have bignum integers. E.g. you can easily have a bignum class in C++, which is static.

You can't have a variable-length bignum as an unboxed value type.

"Language with unboxed value types" and "statically typed language" are separate, somewhat related concepts.

PartiallyTyped · 3 years ago
Haskell is both compiled and has arbitrarily large integers. Granted, Haskell isn't used for high performance systems, but it can be used to generate programs for high performance systems.
kazinator · 3 years ago
You might not want that unconditional promotion in a systems programming language.

The problem in C that you can avoid is not taking into account the destination type of a calculation.

If you have int16 + int16, being assigned, returned or passed to an int32 type, then the calculation can be done in int32.

If the result is going back to an int16 type, then there is no need to go to int32.

In C expressions, the types are almost purely synthesized attributes: that term means the information flows up the abstract syntax tree from the children to the parents. In a = b + c, the type of (b + c) is determined without regard for the parent = node. This is very simple and has the benefit of being not only easy to understand but in some ways referentially transparent: when we move (b + c) elsewhere, it retains its meaning (except for what happens to the resulting value in the new context). More sophisticated rules could reduce errors, though.

DerekL · 3 years ago
> If you have int16 + int16, being assigned, returned or passed to a int32 type, then the calculation can be done in int32.

By the way, if int has 16 bits (which is rare nowadays), then the calculation will happen in 16 bits. If int has more than 16 bits, then both operands will be promoted to that size before the operation.

ardel95 · 3 years ago
The correct way most people would write this for positive integers is:

    if (a < b)
        return a + (b - a) / 2;
    else
        return b + (a - b) / 2;

This method is just more efficient (for places where it matters) as it avoids divisions and branches. But for the vast, vast majority of use-cases that tiny efficiency gain doesn't really matter.
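A sketch of that shape in Python (// is floor division, which agrees with C's / for the non-negative operands assumed here):

```python
def midpoint(a: int, b: int) -> int:
    # Subtract first, so the intermediate difference stays in range
    # whenever the (non-negative) inputs do.
    if a < b:
        return a + (b - a) // 2
    return b + (a - b) // 2

assert midpoint(2, 8) == 5
assert midpoint(8, 2) == 5
assert midpoint(3, 4) == 3   # floor of 3.5
```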

zimpenfish · 3 years ago
Is that the right way around? If `a<b` then `a-b` will be negative, surely, and adding that to `a` will be moving down from `a`, not from `a` to `b`?

cf https://go.dev/play/p/SgdWzzqDygn

Chinjut · 3 years ago
Wouldn't this still overflow when one of the values is positive, the other is negative, and their difference is sufficiently large?
glxxyz · 3 years ago
It can be rewritten without the test:

    return a - (a - b) / 2;
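Sketched in Python: under floor division this branch-free form computes the ceiling of (a + b) / 2 (C's truncating division would instead round toward a), and note that a - b can still overflow in fixed-width arithmetic when the signs differ:

```python
def mid_branchless(a: int, b: int) -> int:
    # a - floor((a - b) / 2) == ceil((a + b) / 2) under floor division.
    return a - (a - b) // 2

assert mid_branchless(2, 8) == 5
assert mid_branchless(8, 2) == 5
assert mid_branchless(3, 4) == 4   # ceil of 3.5
```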

noasaservice · 3 years ago
you can also do

min(a,b) + ((max(a,b) - min(a,b)) >> 1)

andrewla · 3 years ago
It seems like if you did this you would need special syntax for accumulators and loops; there you cannot (necessarily) use static analysis to determine the proper types.
travisgriggs · 3 years ago
Amongst the many other dynamic languages, Smalltalk also did this, and took it a bit farther. Python/Erlang will turn integer division (unbounded or optimized) into a floating point result.

Smalltalk had first class fraction objects as part of its transcendental number system. There's a great story about Dan Ingalls changing one line of code to make all of the original Smalltalk BitBlt transparently work with fractions. I always miss having fractions as part of the transcendental math experience in "very high level languages".

The downside of these approaches, is that you can optimize some paths so that things stay relatively quick, but other paths will really slow down all of a sudden.

For example, in Smalltalk,

    16r7FFFFFFF timesRepeat: [ self fastOp ]
would allow you to do a microbenchmark on a fast operation. But if you moved to

    16r7FFFFFFF + 1 timesRepeat: [ self fastOp ]
your benchmark would suddenly take 30x+ longer, because you had tripped from the immediate tagged integer format to the integer-as-an-object representation.

mkl · 3 years ago
Python integer division (//) will always return an integer. Proper division of two integers with / will return a float, not an exact rational (except when the float can represent it).
ipsi · 3 years ago
In addition to what everyone else has said, that would presumably prevent you from writing statements like "a++" or "a += 1" - instead you'd have to write "a = (a + 1) as u8", which seems like it would get very tedious, even if it is much clearer that it could overflow.
rep_lodsb · 3 years ago
In the proposed language (which isn't C!), that should produce an exception on overflow.

If you wanted it to wrap around, you could use an expression like "a = (a+1) % 256", or maybe something like Ada's modular types.
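A minimal sketch of the wrap-around expression (in Python, whose % always yields a non-negative result for a positive modulus, which is what makes the decrement case work too):

```python
a = 255
a = (a + 1) % 256          # explicit 8-bit wraparound
assert a == 0

a = (a - 1) % 256          # decrementing past 0 wraps back to 255
assert a == 255
```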

hota_mazi · 3 years ago
Rust has a collection of wrapping functions to cover this case, e.g.

    a = b.wrapping_add(c);

withinboredom · 3 years ago
The amount of cryptography that relies on overflows is amazing and annoying when trying to implement it in a language/framework that doesn’t overflow.
usefulcat · 3 years ago
The boost safe_numerics library (c++) has support for this (up to a point), in addition to detecting overflow at compile or run time.
gumby · 3 years ago
> If I ever create a programming language, one thing I'd like to try is to promote integer operations to larger types, so there is never overflow.

Are there any languages other than Lisp and Python that have automatic bignum support?

fulafel · 3 years ago
Pike, Haskell.

Clojure makes a gesture toward it, literals are automatically promoted but the normal operators don't do it:

  user=> (+ 1 9223372036854775807) ;; max signed int64
  Execution error (ArithmeticException) at user/eval5 (REPL:1).
  integer overflow
  user=> (+ 1 9223372036854775808) ;; one higher
  9223372036854775809N

ErikCorry · 3 years ago
Dart initially had it, but JS compatibility was more important.

However they don't have JS compatibility either (Dart does not round off large integers like JS does), so I forget what the point was.

Dart also (very early) had infinite precision fractions. So if you divided 355 by 113 you didn't get 3 or 3.1415... you got the fraction 355/113, which was an instance of a first class numeric type.

Unfortunately this means your numbers can grow to need arbitrary amounts of storage in loops.

arh68 · 3 years ago
Erlang's got them, I'm pretty sure.
tudorpavel · 3 years ago
Ruby's Integer type works like BigInteger under the hood.
halpmeh · 3 years ago
The issue with this type of promotion is that you usually want to preserve the type. Like if I add two int32s, I probably want an int32 as a result.

A cooler feature would be requiring the compiler to prove the addition wouldn’t overflow.

Sesse__ · 3 years ago
> A cooler feature would be requiring the compiler to prove the addition wouldn’t overflow.

Wuffs is specialized enough to do exactly that (https://github.com/google/wuffs).

PaulHoule · 3 years ago
Many dynamic languages (e.g. Python, Clojure) do some variation of promoting numbers to larger types, usually not in small increments (keeping track of how many bits the number is probably adds an unreasonable overhead) but in large increments and ultimately promoting to an arbitrary precision int, a rational, or a BigDecimal. The people I know who are messing around with 5-bit ints and other irregular types are doing it with FPGAs where it unambiguously saves resources as opposed to costing them.
layer8 · 3 years ago
You eventually need bigints for that, and thus trade off performance against the no-overflow guarantee.
dahfizz · 3 years ago
What do you do when operating on the largest int type available?
captainmuon · 3 years ago
Maybe spit out a warning and use bignums? You are then supposed to explicitly narrow the type, or explicitly opt into using bignums.
tromp · 3 years ago
Don't have a largest int type?!
TheRealPomax · 3 years ago
If you're going that far, why bother with a "next largest" that's more than 1 byte larger? If you're just using it for intermediate values, just uplift that int16 to an int24, or that int64 to an int72. They're only there for as long as the LHS/RHS needs to evaluate; they're not getting stored, so there's no need to double the memory allocation.
jefftk · 3 years ago
What type do I get if I'm multiply four int32s?
captainmuon · 3 years ago
It should fit in 128 bits, in the range of roughly [-2^124, 2^124] (the exact bounds are slightly asymmetric, since max and min int32 are). The question is what type do you want? Do you want the full result? Or do you want it to be modulo something? Or are you sure that the numbers are small, but just kept in int32 storage for some other reason?

The compiler could use static analysis to keep the size to a minimum, for example in this case:

    int32 a
    int32 b = 2
    var c = a * b
it would know that c just needs 33 bits.
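A quick sanity check of that bound in Python (the bounds are written out explicitly; this just checks the claimed magnitude, it's not a real compiler feature):

```python
INT32_MIN, INT32_MAX = -2**31, 2**31 - 1
b = 2
# The extreme products of a 32-bit a with b = 2 both fit in the
# signed 33-bit range [-2**32, 2**32 - 1].
assert -2**32 <= INT32_MIN * b
assert INT32_MAX * b <= 2**32 - 1
```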

dragontamer · 3 years ago
How many bits is x/y?

Assuming x and y are 32-bit integers

messe · 3 years ago
33-bits? INT_MIN / -1 = INT_MAX + 1.
throw827474737 · 3 years ago
And what is the type of the sum of three int16s? Of n int16s? Do you want to have your own calculation system for tracking sizes through your calculations? Either go straight to unbounded ints like some languages (e.g. Python) already do, or don't bother... and it comes with a cost.
benibela · 3 years ago
Pascal does something like that.

But it only goes up to 64-bit, then you get overflow again

HappySweeney · 3 years ago
IIRC Dart 1 did this, and it was abandoned due to binary size issues.
planede · 3 years ago
Very much relevant:

CppCon 2019: Marshall Clow “std::midpoint? How Hard Could it Be?”

https://www.youtube.com/watch?v=sBtAGxBh-XI

mojuba · 3 years ago
This is an excellent interview question (I used it a lot). Reveals what the candidate knows about how the computers work. 90-95% of engineers don't even see a problem with (a + b) / 2 until you tell them about the overflow, let alone find a solution for it.

The majority of programmers have no idea what an int is in their favorite language and what its range is (roughly).

Then the majority come up with (a / 2) + (b / 2) until they run the unit tests and realize it's wrong.

And so on and so forth, with this question you can uncover layers upon layers of trivial (non-)knowledge.

gjadi · 3 years ago
I wonder.

IMHE this is the kind of edge that you know only because you've been bitten by a bug once.

It's the same with floating point numbers. You may know that the representation is not exact, or that you can end up with NaN. But I found that I only knew it viscerally after I had banged my head on bugs related to these.

Of course, that could be provided by a Comp Sci or Comp Eng curriculum, but time is finite...

In the 5-10% of engineers who saw the problem, how many had experienced it once themselves before?

mojuba · 3 years ago
It's not just about seeing the problem but also knowing what you are dealing with. The majority of engineers, or whoever call themselves engineers, don't know what an int is. Some Java programmers I interviewed years ago thought the range of the int type was, I quote, "256", or "65 thousand-something"; these were literal answers. Never mind that neither is even a range!

So you are an Android engineer and you deal with ints a lot. Screen coordinates are ints on Android, so if you think the range of an int is "256" how do you think your app works at all?

This question reveals to me one of the most important things I'm usually looking for when hiring: natural curiosity. A software engineer should be curious about things he or she is dealing with. And that starts with the most trivial things like "what is an int really?" and then moves on to other concepts like: under the hood, what is a function?, what is a virtual method? what does `await` really do? And so on.

A good engineer should know how the computers work, and I don't know why this should be even questioned.

brudgers · 3 years ago
It's the kind of edge case I know because I just read the article...and it's probably bad if you've been bitten and that bite is still driving how you handle this case.

Because while it is easy to be bitten by this at 16 or 32 bits, if it happens at 64 bits (1.8446744e+19) it's almost certainly an abstraction error, like doing arithmetic on identifiers rather than values.

Back around 2010, I wrote some code for the first time in a very long time and that code initialized a 10,000 integer array and my first thought was "that's too big to work." Kilobyte thinking in a gigabyte future.

To a first approximation, as an interview question it fights the last war...again embedded systems excepted.

DenisM · 3 years ago
So what sort of answer do you find satisfactory? It seems like the "right" solution is non-trivial bit twiddling, do you expect people to come up with that, or stop sooner than that?
mojuba · 3 years ago
People rarely come up with a working solution within 30 minutes. Any solution is good as long as it doesn't overflow (there are a few suboptimal ones). But the point of this question is, again, to understand what they know about how the computers work.
lesuorac · 3 years ago
> This is an excellent interview question (I used it a lot).

Is it though? It just tells you if they know the trick or not. At that point you might as well have them fill out a form and use the score from that to hire/no hire.

It doesn't tell you if they understand what RAM is or storage (HDD/SSD/etc) or the various buses on a motherboard or pretty much anything about how a computer works. For the example given in the article, it's pretty rare for (a+b)/2 to overflow since the default ints end up being 32 bit (article calls out 64bit tbh) and your parameters are [-1000,1000].

---

> 90-95% of engineers don't even see a problem with (a + b) / 2 until you tell them about the overflow, let alone find a solution for it.

In my experience a similar percentage can't write a working DFS which I think is much more work related than midpoint.

msluyter · 3 years ago
My first real job was an SDET position, and I was asked how to test a function that takes 3 numbers representing the lengths of the sides of a triangle and determines whether the numbers can form a possible triangle. E.g. (in Python),

    def is_triangle(a, b, c) -> bool:
        ...etc...
One of the things an ideal candidate would realize is that the triangle inequality needs to apply (a + b >= c (for all permutations of a,b,c)), and that if a developer naively implemented the above via:

    if a + b < c:
        return false
it'd run into this exact problem.

I'd thought this question had gotten stale / overdone, but perhaps it's still a great interview question.
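To make the failure concrete, here's a Python sketch that simulates 32-bit signed wraparound (Python's own ints never overflow, so the wrap has to be simulated), with an overflow-safe rewrite assuming non-negative side lengths:

```python
def wrap32(x: int) -> int:
    """Simulate 32-bit two's-complement wraparound."""
    x &= 0xFFFFFFFF
    return x - (1 << 32) if x >= 1 << 31 else x

INT_MAX = 2**31 - 1
a, b, c = INT_MAX, INT_MAX, 5   # a very thin but valid triangle

# Naive check "a + b < c": the huge sum wraps to -2, so the code
# wrongly concludes the triangle inequality fails.
assert wrap32(a + b) == -2
assert wrap32(a + b) < c            # spurious failure

# Overflow-safe rewrite: compare a < c - b instead; c - b stays in
# range whenever 0 <= b, c <= INT_MAX.
assert not (a < c - b)              # the inequality actually holds
```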

feoren · 3 years ago
This is so fraught with problems.

First of all, doesn't it also need to have the condition (c > a - b)? Maybe you just left this part out?

Secondly, you're worried that (a + b) could overflow. The triangles in your applications are just THE MOST EXTREME! That's how cool your application is! You have the MOST EXTREME TRIANGLES!

But wait! When are you dealing with integer lengths of triangles? You never specified they were integers. In 99.99% of all real-world applications dealing with triangles, their coordinates are floating-point. I think it's fair to say overflow isn't nearly your biggest arithmetic problem with floating points -- rather, once you get to extreme (and extremely different) exponent values, you have all sorts of accuracy and loss issues, long before overflow becomes an issue. Do you expect the candidate to also handle the case where one side is 1e-155 and another is 1e274? Because otherwise their code is "wrong"!

So left unspecified, your "gotcha" is completely ridiculous and just a mind-reading trap to arbitrarily filter out people who don't think exactly (and erroneously) like you do!

Or maybe you did mean that the sides of the triangle are constrained to integer lengths? That would be extremely unusual, so you absolutely need to specify that. But if you're constraining the side lengths to integers, are you also constraining the vertex coordinates to integers? It would be extremely strange to require integer lengths but allow floating-point coordinates; but it would also be extremely strange to only allow integer coordinates, as most triangles with integer coordinates have at least one non-integer side length! And it doesn't sound like a trivial problem at all to find whether a given set of integer lengths could form a triangle on an integer lattice, but that seems to be what you maybe think you're asking? Do you even know what you're asking?

If integers are involved at all, it's far more likely that the coordinates are integers, but the side lengths are floating point.

What a tremendously awful interview question. I really hope your description is just extremely limited and flawed and mis-remembered, because if that's the actual question, you are perpetuating the "arbitrary leetcode interviews" problem that competent software engineers always complain about when they look for jobs.

naasking · 3 years ago
> Then the majority come up with (a / 2) + (b / 2) until they run the unit tests and realize it's wrong.

Seems like you can cover the corner case easily enough:

    x/2 + y/2 + (x & y & 0x01)
A few more operations than the article though so not the most efficient solution.
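In Python, where // and % floor rather than truncate, this corrected formula is in fact exact for all integers; the trouble in C is that / truncates toward zero for negative operands:

```python
def mid(x: int, y: int) -> int:
    return x // 2 + y // 2 + (x & y & 1)

assert mid(6, 8) == 7
assert mid(3, 5) == 4                    # both odd: the lost carry is restored
assert mid(-(2**31), 2**31 - 1) == -1    # floor(-0.5) is -1
```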

pksadiq · 3 years ago
> x/2 + y/2 + (x & y & 0x01)

This returns -1 for x = INT_MIN and y = INT_MAX, where the answer should be 0 (for example), so it's not a correct solution.

brudgers · 3 years ago
Per Google (suddenly I feel a need to make it clear I didn't use chat GPT)

  2^64 is 1.8446744e+19
To a first approximation, if your application is overflowing 64 bit integers, the problem is in the abstractions not sloppiness in low level details...something like doing arithmetic on IPv6 addresses.

What I mean is that it's one thing if you specify a 32-bit or 16-bit architecture because the job entails low level programming with that constraint.

But it's entirely another thing if it is used as a general test of software engineering skill, because on the one hand, now I know the trick of the trick question, and you wouldn't hire me based on my software engineering chops.

And on the other hand, in the vast majority of cases, the solution that might overflow is not just the simplest thing that might work, but it will also work well enough for the business purpose and be easier to support and maintain in the code base.

Finally, handling problems like this culturally, during development and after-action debriefing, is healthier than how-many-golfballs questions at the interview stage... like I said, I know the answer.

feoren · 3 years ago
Completely agreed. The statement "that function is wrong because it could overflow" is putting one potential edge case on a pedestal while ignoring all other considerations for why you might prefer one alternative over another. The vast majority of the code in most business applications can perform straight arithmetic on 64-bit ints without worrying too much about overflow -- you just never encounter numbers like 2^63, and only rarely encounter numbers that wouldn't fit in a 32-bit signed integer.

When you write a bit of code, you naturally have in mind the realistic range of values you'll be working with. Even if it's just within 4 orders of magnitude. You know whether you're dealing with thousands or quadrillions. In the extremely rare case it's the latter, then you start worrying about this. You just can't worry about being 10 orders of magnitude off in your assumptions all the time -- that's what Exceptions are for. Holy crap, this software is dealing with values ten orders of magnitude different than you programmed for!? Absolutely throw an exception in that case, because it requires a complete rewrite of that area of the code.

Yes, if you're writing midpoint in a language-standard math library, it should work for all possible inputs. But the point of looking at toy problems in software engineering blogs is to inform us how to write our much more complicated, much more domain-specific code, and these lessons just don't cleanly transfer into that world.

TacticalCoder · 3 years ago
I have a question for the interviewer, maybe a bit metaphysical: what should your function return when one parameter is zero and the other is 1?
roflyear · 3 years ago
You just need to define your behavior there. It's similar to how languages choose which rounding method to use.
heavenlyblue · 3 years ago
> Then the majority come up with (a / 2) + (b / 2) until they run the unit tests and realize it's wrong.

Why is that wrong?

eloff · 3 years ago
It's mathematically correct, but integer division discards the fractional part (truncating toward zero in C, flooring in Python). So you can lose a fractional 0.5 from both a/2 and b/2, which would have added up to 1, and now your result is off by one from (a+b)/2.

In general, even with floating point numbers you can get different results by rounding implicitly or intentionally the intermediate values versus the end result.

stevoski · 3 years ago
If a=1 and b=1, the midpoint should be 1.

But in computing, assuming we are only dealing with integers, 1/2 == 0. So 1/2 + 1/2 returns 0.

Jagerbizzle · 3 years ago
The dev blog from Microsoft in a different comment covers this:

"This will be too low if the original addition contained a carry from bit 0 to bit 1, which happens if bit 0 is set in both of the terms, so we detect that case and make the necessary adjustment."

return (a / 2) + (b / 2) + (a & b & 1);

scajanus · 3 years ago
If a and b are both odd, you end up rounding twice vs. zero times if you take the sum first.
nathell · 3 years ago
Because, assuming / means integer division, this would return 6 as the midpoint between 7 and 7.
robofanatic · 3 years ago
If it's a known issue, then why don't compilers spit out a warning?
eloff · 3 years ago
You don't necessarily need that, actually; there are carry flags that are set on overflow, which you can check to get that extra bit of precision.

In Rust you can actually do this using the checked add/sub/etc methods. It's one of the things I really appreciate about the language. By default, it panics on overflow in debug builds. You have to use the wrapping variants to declare that's intentional.

How about:

  if a > b { 
    mid = (a-b)/2 
  } else { 
    mid = (b-a)/2 
  }
There must be a way to do it without the branch? edit: Yes, in the article, I'm an idiot.

tromp · 3 years ago

    if a > b { 
      mid = (a-b)/2
That only avoids overflow on unsigned types (where it would be called wraparound) ...

shkkmo · 3 years ago
(a / 2) + (b / 2) + ((a % 2) + (b % 2)) / 2 works and can be extended from 2 to n to find the average of a set of integers.
a1369209993 · 3 years ago
Note that this can equivalently[0] (and more readably) be written as `(a/2) + (b/2) + (a&b&1)`.

0: Assuming your language rounds integer division and modulo correctly, ie that `i%array_len` is reliably a valid array index. C has problems here when i (respectively: a or b) is signed, but that doesn't matter in the sensible case, where you always index everything with size_t.
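A hedged sketch of that n-way extension in Python (the function name `average` is mine; the exactness relies on floor division and a non-negative %, as Python defines them):

```python
def average(xs):
    # Split every term into quotient and remainder mod n, so no
    # intermediate grows much beyond max(xs) + n * (n - 1).
    n = len(xs)
    q = sum(x // n for x in xs)
    r = sum(x % n for x in xs)   # each x % n is in [0, n), so r < n * n
    return q + r // n

assert average([7, 7]) == 7
assert average([1, 2, 3, 4]) == 2                   # floor(10 / 4)
assert average([2**31 - 1, 2**31 - 1]) == 2**31 - 1
```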

roflyear · 3 years ago
It's good I guess if people use languages where it matters. Coming up with a real solution in C seems really annoying.
lpage · 3 years ago
Joshua Bloch did a great post [1] on how nearly all standard library binary searches and mergesorts were broken (as of 2006) due to this exact issue. The punchline is that the bugs started cropping up when the need and capability to sort O(2^32) element arrays arose.

[1]: https://ai.googleblog.com/2006/06/extra-extra-read-all-about...
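The fix Bloch describes is to form the midpoint without ever computing low + high; a sketch of the repaired search (in Python, which only illustrates the idiom, since Python ints themselves don't overflow):

```python
def binary_search(xs, target):
    low, high = 0, len(xs) - 1
    while low <= high:
        mid = low + (high - low) // 2   # safe: high - low can't overflow
        if xs[mid] == target:
            return mid
        if xs[mid] < target:
            low = mid + 1
        else:
            high = mid - 1
    return -1

assert binary_search([1, 3, 5, 7, 9], 7) == 3
assert binary_search([1, 3, 5, 7, 9], 4) == -1
```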

pksadiq · 3 years ago

  int mid(int x, int y) {
    return (x/2 + y/2) + (1 & x & y);
  }
would be a more readable solution

edit: Actually, this fails on mid(INT_MIN, INT_MAX) and possibly other mixed sign values (returns: -1, expected: 0 (or -1 is okay?), where the precise answer is -0.5)

more edit: The C standard says: When integers are divided, the result of the / operator is the algebraic quotient with any fractional part discarded (This is often called ‘‘truncation toward zero’’).

So -1/2 should be 0.

nwellnhof · 3 years ago
For signed ints, try

    (x >> 1) + (y >> 1) + (x & y & 1)
This rounds toward negative infinity. Also

    (x >> 1) + (y >> 1) + ((x | y) & 1)  // Rounds toward +Inf
    (x >> 1) + (y >> 1) + (x & 1)        // Rounds up if x is odd
    (x >> 1) + (y >> 1) + (y & 1)        // Rounds up if y is odd
I'd be curious if there's a bit-twiddling trick for rounding toward x or y.
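The first two variants are easy to spot-check in Python, where >> on ints is an arithmetic shift (like a signed shift in C), so x >> 1 equals floor(x / 2) even for negative x:

```python
def mid_floor(x: int, y: int) -> int:
    return (x >> 1) + (y >> 1) + (x & y & 1)

def mid_ceil(x: int, y: int) -> int:
    return (x >> 1) + (y >> 1) + ((x | y) & 1)

assert mid_floor(3, 4) == 3 and mid_ceil(3, 4) == 4
assert mid_floor(-3, -4) == -4 and mid_ceil(-3, -4) == -3
assert mid_floor(-1, 0) == -1 and mid_ceil(-1, 0) == 0
```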

tiffanyh · 3 years ago
Using the midpoint function is the most readable.

https://en.cppreference.com/w/cpp/numeric/midpoint

Though it's odd this allows overflow.

abainbridge · 3 years ago
std::midpoint also seems to yield less efficient code with g++12 targeting x86-64: https://godbolt.org/z/j695ce98Y
abainbridge · 3 years ago
Possibly true, but it yields less efficient code in GCC 12 for x86-64. https://godbolt.org/z/oafPrb4K8
bonzini · 3 years ago
Because it's also wrong! / rounds towards zero, while summing a&b&1 requires the division to round towards negative infinity.
neilv · 3 years ago
There's an entire category of bit-twiddling methods for different operations.

They were developed by programmers at places like the AI labs at MIT and Stanford, in much earlier days of computing, when size and time constraints were much more in everyone's faces than they are today.

delta_p_delta_x · 3 years ago
Also related: Quake III fast inverse square root[1].

[1]: https://github.com/id-Software/Quake-III-Arena/blob/master/c...

hgsgm · 3 years ago
In the old days, when programmers knew that everything computers did was AI.
neilv · 3 years ago
IIUC, Marvin Minsky, at some early point in "AI", wanted to build it with computers as a method for understanding how the human mind works. I think Computer Science came a little later.

When I got into "AI", much later, coming from CS and software engineering thinking, "AI" seemed to be "things we currently think are hard for computers to do (and useful computational methods that we've already figured out)".

Now "AI" is getting different, and generalized AI is looking more plausibly attainable than it was shortly before DL. (Minsky thought it would happen quicker, and also speculated explosive growth of capability once the AI could teach and evolve itself.)