Saturday, 14 November 2015

Compiler 3 Part 8 - Finale

This will end up being the longest, and final, post in the series. So let's tuck in!

Testing and Everything Else

The scanner and most of the infrastructure stayed exactly as they were.

The parser and AST did see some changes. The parser, in particular, saw several bug fixes and simplifications. Some of the error checking that had been done during parsing has been handed off to the semantic analyzer. Similarly, anything related to type checking could be removed.

Also, the AST-related scoping code could be simplified and confined to the parser since it is discarded during semantic analysis.

Detecting “true” and “false” was added for the boolean type.

With those changes out of the way, I can say that if there is one area where Calc has truly lacked, it is testing.

A compiler project should, in my opinion at least, be concerned with correctness. Correctness and consistency. The IR was one step in shoring up Calc in this area and testing is the other. Calc has always incorporated at least a little testing but it has never been as comprehensive as it should be.

Go provides great testing facilities right out of the box. The cover tool has also proven helpful in sussing out corner cases that need testing.

I have, I think, made large strides in making Calc a much more stable compiler.

That isn’t to say that it’s perfect. I’m sure within 5 minutes of playing with it someone will discover some bugs. Hopefully, those who find them will take the time to submit an issue so I can create tests and fix them.

This is the value of having an open source project.

The Future...

Calc has continued to evolve along with my own skills and I hope that I’ve helped someone, in some way, to further their own growth. Whether it be teaching someone how to code, or how NOT to, the end result remains the same. Someone, somewhere, has benefitted from my efforts.

Now that I’ve had a chance to regroup from the herculean effort of writing and implementing the Calc2 spec and teaching series, I feel like I can continue to produce more content.

I do hope to continue talking about compiler construction. In the future, I hope that by using a smaller format, like this one, I can produce more posts more quickly. I can certainly say that this third series has been much more enjoyable to write!

I have also been working on another side project to create a virtual machine with a complete software stack. It has an assembler, linker and virtual machine running its own bytecode. If there is interest, I’ll write some blog posts about it. You can find the project here.

I have concluded both previous series talking about future plans and I shall do no less today. In series two I had a list of things I wanted to implement and change. Let’s go through the list:

  • growable stack - well, this is null and void. Calc no longer manages its own stack.
  • simple garbage collector - I no longer plan on implementing a garbage collector myself. See below.
  • loops - not implemented
  • generated code optimizations - done! 
  • library/packages - not implemented
  • assembly - null and void. I don’t plan on generating assembly.
  • objects/structures - not implemented
Only one of the seven features I wanted to implement was actually completed. Of the remaining six, three no longer make sense. The groundwork for the other three has been laid down. In particular, loops and structures only await language design since implementation should be almost trivial.

Incorporating the remaining three features, I now present you with an updated list:

  • #line directives for source-debug mapping - I think this is a very reasonable and easily obtainable goal. It will make debugging Calc with the GNU debugger easier.
  • logical && and || operators - easily added with minimal work
  • importing libraries - thanks to the x/debug/elf package, importing libraries into code may not be unreasonable to achieve in the near future. The scope of adding imports is likely a series in and of itself.
  • garbage collection - Calc 2.x does not need garbage collection. It does not have pointers and it does no memory allocation. However, when it does, I will probably use the Boehm garbage collector rather than attempting to spin my own. Additional info can be found on Wikipedia. This feature will be far in the future so don't expect it any time soon.
  • structs, loops and arrays - with the new code generator, implementing structs and arrays on the back end ought to be trivial. Work on this has actually already begun and you can view some of the changes here. Sorry, the specification is not published publicly right now.
  • stack tracing - there isn’t much holding me back from implementing stack traces on crashes now that the new code generator is in place. Time and effort are all that remain.
And that, as they say, is that! Thank you for taking the time to read through this series! At some point I’d like to put everything into a PDF but that would be a large task since large parts of both series would need to be re-written entirely. In fact, I might even want to start again from scratch.


Until next time!

Friday, 13 November 2015

Compiler 3 Part 7 - Code Generation

Here we are at the final step.

As mentioned in the introduction, I cut out a massive amount of code by offloading much of the work onto the IR.

So here is what has changed:

I introduced a small function to map Calc types to C types. This could probably exist in ir.Types, too, but I chose to keep it coupled with the code generator since that’s the only code that uses it.
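
As a rough sketch of what such a mapping function might look like (in Go; the function name and the exact C types chosen here are my own stand-ins, not Calc's actual code):

package main

import "fmt"

// cType maps a Calc type name to its C equivalent. The names and the C
// types chosen here are hypothetical; the semantic analyzer has already
// rejected unknown types, so anything else reaching this point is a bug.
func cType(calcType string) string {
    switch calcType {
    case "int":
        return "int"
    case "bool":
        return "bool" // from <stdbool.h>, one more reason C99 is required
    default:
        panic(fmt.Sprintf("no C mapping for Calc type %q", calcType))
    }
}

func main() {
    fmt.Println(cType("int"), cType("bool"))
}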

Any object with an ID has its original name stripped and replaced by a generic variable name. These names start with an underscore and the lower-case letter “v” followed by the ID of the object.

Each binary operation is assigned to a new variable (a reason why C99 is required). Don’t worry about this being wasteful. Even if you choose to output a chain of infix arithmetic (1 + 2 + … + N), the underlying assembly instructions usually take no more than two operands anyway. This is why the above method works so well: it closely matches the machine code.

Note: I've spent a lot of time comparing the assembly generated from C to see how things work. This is a fun (am I sick?) and interesting exercise to try yourself.
Even if you didn't follow along with the last series, I encourage you to view the previous binary code generation function: 58 lines of confusing mess, now pared down to a simple 5-line function. Perhaps more importantly, the generated code is easier to read and follow even though it’s not really intended for visual parsing.
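
To give you a feel for how little is left to do, here is a toy Go version of what that kind of function boils down to. The node types are stand-ins I made up, not Calc's real IR:

package main

import "fmt"

// A toy expression tree standing in for the IR.
type expr interface{ gen(next *int) string }

type lit int

type binary struct {
    op       string
    lhs, rhs expr
}

func (l lit) gen(_ *int) string { return fmt.Sprint(int(l)) }

// gen emits one C declaration per operation and returns the variable
// name holding the result.
func (b binary) gen(next *int) string {
    lhs, rhs := b.lhs.gen(next), b.rhs.gen(next)
    *next++
    name := fmt.Sprintf("_v%d", *next)
    fmt.Printf("int %s = %s %s %s;\n", name, lhs, b.op, rhs)
    return name
}

func main() {
    // a * (b + c) / d with the constants 6, 1, 2 and 9
    tree := binary{"/", binary{"*", lit(6), binary{"+", lit(1), lit(2)}}, lit(9)}
    var id int
    tree.gen(&id)
    // Prints:
    // int _v1 = 1 + 2;
    // int _v2 = 6 * _v1;
    // int _v3 = _v2 / 9;
}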

Another important change is using C-style function calling and function declarations. This was made possible by the new IR and type system. With every object being assigned a valid type, we can easily create proper C function prototypes and definitions.

By utilizing the SSA-like code generation and the new IR, it also becomes trivial to use the C-style calling convention. Types have already been checked, the number of arguments verified, and sub-expressions have been assigned IDs. Therefore, only raw values and object IDs are passed into the function.

All in all, fairly simple.

Tuesday, 10 November 2015

Compiler 3 Part 6 - Tagging

This step is crucial to Calc’s code generator but may not exist at all in other compilers. Regardless, it makes code generation for Calc dead simple. Before I can get into the process, you do need a little background information.

In the introduction I mentioned something about 3AC and SSA. Make sure you check out the articles, but the short version is this:

In SSA, each calculation is given a unique ID. These IDs replace variable names and other objects.

So, what does this mean to us?

Consider the following two example infix arithmetic operations:


  1. a + b + c + d 
  2. a * (b + c) / d 

These two operations could be translated into something like the following:

Example 1: a + b + c + d
r1 := a + b
r2 := r1 + c
r3 := r2 + d


Example 2: a * (b + c) / d
r1 := b + c
r2 := a * r1
r3 := r2 / d


As you can see, the result of each binary operation is assigned to a new, unique variable. While verbose, it is much more akin to assembly and easy for C compilers to optimize.

It also ensures that calculations are done in the correct order and removes the necessity of pushing and popping operands to the stack.

Armed with knowledge, we can now get on with what I call tagging.

Tagging is the process by which we attach these unique identifiers to each operation. Variables, parameters, binary operations and unary operations all need to be tagged.

Oddly, perhaps, even if statements need to get tagged and you may wonder why that is. Well, if statements in Calc are not statements. They’re expressions. Like if expressions in functional languages, if expressions in Calc return a value.

As part of escape analysis and type checking, the value of any branch in an if expression must be checked and tagged since its result may be used elsewhere in the code.

As you can see, folding constants can potentially save us time by reducing the number of operations needing an ID.

We can traverse the tree in any manner we choose provided that every ID is unique.
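
A minimal sketch of a tagging pass in Go; the node type is a stand-in for Calc's IR objects, not the real thing:

package main

import "fmt"

// node is a stand-in for an IR object that needs a unique ID.
type node struct {
    op       string
    id       int
    children []*node
}

// tag walks the tree depth first, handing out IDs as it unwinds so
// every operand is tagged before the operation that uses it.
func tag(n *node, next *int) {
    for _, c := range n.children {
        tag(c, next)
    }
    *next++
    n.id = *next
}

func main() {
    // (a + b) * c
    tree := &node{op: "*", children: []*node{
        {op: "+", children: []*node{{op: "a"}, {op: "b"}}},
        {op: "c"},
    }}
    var id int
    tag(tree, &id)
    fmt.Println(tree.children[0].id, tree.id) // 3 5: the + is tagged before the *
}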

One more stop left to go. Code generation!

Monday, 9 November 2015

Compiler 3 Part 5 - Constant Folding

The only optimization included in series 2 was constant folding. Unlike in the previous series, we’re now much better equipped to handle this optimization.

I feel that I should point out that most C compilers do this step, too. I was somewhat reluctant to keep it in at first. However, my reasoning for keeping it is two-fold: one, I think learning a bit about optimizations on an IR is a worthy lesson; and two, it can help make the next step a little quicker.

Once again, the IR comes to the rescue! In the first step of transforming the AST into the IR we created constants. These were objects representing actual values. This crucial step makes it much easier on us now.

Unary and binary operations are ideal candidates for folding. We have already done the work of verifying types and correctness so we can ignore all that now. When we check whether a value is a certain type, we can be confident that information is correct.

Looking at the code you can see that it’s pretty simple. Traverse the tree depth first.  If both operands of a binary object, or the single operand of a unary object, are values of the correct type we can fold them together. We then return the result as a new constant value and replace the previous object with it.

Moving back up the tree we repeat the process until we exhaust all the foldable values.
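
Here's the idea stripped down to a Go sketch; the object shapes are simplified stand-ins, not the real IR:

package main

import "fmt"

type object interface{}

type constant int

type binary struct {
    op       string
    lhs, rhs object
}

// fold recurses depth first; when both operands come back as constants
// the whole operation is replaced by a single new constant.
func fold(o object) object {
    b, ok := o.(*binary)
    if !ok {
        return o
    }
    b.lhs, b.rhs = fold(b.lhs), fold(b.rhs)
    l, lok := b.lhs.(constant)
    r, rok := b.rhs.(constant)
    if !lok || !rok {
        return b // one side isn't constant; leave the operation alone
    }
    switch b.op {
    case "+":
        return l + r
    case "*":
        return l * r
    }
    return b
}

func main() {
    // (1 + 2) * 4 folds down to the single constant 12
    fmt.Println(fold(&binary{"*", &binary{"+", constant(1), constant(2)}, constant(4)}))
}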

You can see the value in converting basic literals into constants when building the initial IR tree. It makes folding constants together much, much simpler.

Onward and upward!

Sunday, 8 November 2015

Compiler 3 Part 4 - Type Checking

Whether you choose to produce assembly or an intermediate language, the job of the IR is to take us ever closer to generating actual output. We need to do the crucial step of type checking. Since Calc’s type system is strong and static, it is necessary to ensure type safety before moving on.

Each struct in the IR tree has an underlying object. This object provides a Pos() function that maps the remaining constructs back into the original source code. This will be useful to us when we are doing type checking so we can provide well formed error messages.

We must traverse the tree and verify that expected types match. Not all objects currently have a type, like variable declarations using type inference. This means that we don’t actually know the type of the variable when parsing the source language since the type isn't explicitly declared but inferred from the object being assigned to the variable. 

Similarly, binary expressions do not have a defined type when parsing. Your language may allow adding (concatenating) strings, in which case an add binary operation on strings would have the type string. In the case of Calc, logical comparison operations make a binary expression type bool. Don't forget that most languages have different integer types so your binary expressions will need to take on and verify these types.

This requires that we traverse the tree depth first, identifying the types of certain structures prior to verifying their correctness.
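
In Go, the depth-first idea looks roughly like this (toy node shapes, not Calc's actual ones):

package main

import "fmt"

type typ string

type node struct {
    op       string // "+", "<", "lit", ...
    typ      typ    // filled in as we walk back up the tree
    lhs, rhs *node
}

// check resolves each node's type bottom-up and reports mismatches.
func check(n *node) error {
    if n.op == "lit" {
        return nil // literals already carry their type
    }
    if err := check(n.lhs); err != nil {
        return err
    }
    if err := check(n.rhs); err != nil {
        return err
    }
    if n.lhs.typ != n.rhs.typ {
        return fmt.Errorf("type mismatch: %s vs %s", n.lhs.typ, n.rhs.typ)
    }
    switch n.op {
    case "<", "==": // comparisons produce a bool regardless of operand type
        n.typ = "bool"
    default: // arithmetic takes on the type of its operands
        n.typ = n.lhs.typ
    }
    return nil
}

func main() {
    lt := &node{op: "<",
        lhs: &node{op: "lit", typ: "int"},
        rhs: &node{op: "lit", typ: "int"},
    }
    fmt.Println(check(lt), lt.typ) // <nil> bool
}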

In addition to verifying type correctness, the type-checker also does some additional tests…
  • checks that function parameters match the number of arguments in function calls;
  • verifies that an identifier matches the right kind (variable vs. function names);
  • checks for undeclared variable and function names.

While not true of the current version of Calc, many languages have integer types of varying bit sizes. Somewhere during this process you will need to verify whether a particular value fits within the bounds of a particular type. If the source has a value greater than 255 being stored into an unsigned 8 bit type, that is a problem. When and how you choose to handle this error is up to you.
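
As a tiny Go illustration of the kind of bounds check involved (the function is hypothetical, not part of Calc):

package main

import (
    "fmt"
    "math"
)

// fitsUint8 reports whether a literal's value fits an unsigned 8-bit type.
func fitsUint8(v int64) error {
    if v < 0 || v > math.MaxUint8 {
        return fmt.Errorf("value %d out of range for uint8 [0, 255]", v)
    }
    return nil
}

func main() {
    fmt.Println(fitsUint8(255)) // <nil>
    fmt.Println(fitsUint8(256)) // value 256 out of range for uint8 [0, 255]
}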

Type checking may also involve a level of escape analysis. You need to verify that any value being returned anywhere in the tree conforms to the parent type. In imperative languages, this means looking for statements like “return”. For functional languages, you need to search for terminal points like the "then" and "else" clauses in an "if" expression.

Once we are sure that everything is type safe and correct, we can move on.

Saturday, 7 November 2015

Compiler 3 Part 3 - Intermediate Representation

After presenting Calc2 I came to discover that many parts of the compiler were quite fragile. The type checking was particularly bad, occasionally reporting type errors (false positives) when run over the exact same source code multiple times. 

That’s a serious problem.

I had assumed that using an intermediate representation was strictly for doing optimizations. While they certainly help with, and may be crucial for, optimization, IRs can be useful for much, much more.

The IR used by Calc is nothing special. In fact, it looks very much like the AST and the objects it uses share many of the same names. So what gives?

If you look more closely, there are some crucial differences. For one, each object is represented a little differently. Second, there is much more in common between each object. In fact, there is a struct embedded into every one: object.

This struct gives me access to almost everything needed to perform various tasks, including type checking, error reporting and code generation.
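
Here's a sketch of the shape; everything beyond the Pos() function is my guess at what the post describes (a type and an ID), not the literal Calc source:

package main

import (
    "fmt"
    "go/token"
)

// Type is a placeholder for whatever the IR uses to represent types.
type Type string

// object is embedded in every IR node, giving each one a source
// position for error reporting plus a type and an ID for later passes.
type object struct {
    pos token.Pos
    typ Type
    id  int
}

func (o object) Pos() token.Pos { return o.pos }
func (o object) Type() Type     { return o.typ }
func (o object) ID() int        { return o.id }

// Binary is one example node; every other node embeds object the same way.
type Binary struct {
    object
    Op string
}

func main() {
    b := Binary{object: object{pos: 42, typ: "int", id: 1}, Op: "+"}
    fmt.Println(b.Pos(), b.Type(), b.ID())
}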

While the AST and IR may both be trees, and represent much of the same data, they represent two very distinct things. The AST, as the name implies, represents the actual syntax of the language. Using it, you can somewhat faithfully recreate the original source code. The IR, on the other hand, sits much closer to code generation and thereby represents the code we wish to output.

I look at the IR like this: it acts as a bridge between (intermediate) the syntax and code (representation) generation.

The first step is transforming the AST into IR. This is done with a simple tree walking algorithm, converting each node into the new form. Typically, the compiler starts with MakePackage. From there, you can follow the calls along to see how the tree is built.

There are a few things worth noting.

First, notice that the parameters of functions get separated into objects stored in a function's scope and only a string slice of parameter names is stored with the declaration. This will become important in the next few steps.

Second, we can also begin a simple optimization of converting literals, like number and boolean strings, into actual values called constants. This saves us the step of doing these conversions later on and provides an opportunity to do constant folding in a much more efficient manner.
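
A sketch of what that conversion might look like in Go (hypothetical names and a simplified Constant shape):

package main

import (
    "fmt"
    "strconv"
)

// Constant is a stand-in for an IR constant holding an actual value.
type Constant struct {
    Int  int64
    Bool bool
}

// makeConstant turns a literal string from the AST into a real value.
func makeConstant(lit string) (Constant, error) {
    switch lit {
    case "true", "false":
        return Constant{Bool: lit == "true"}, nil
    default:
        i, err := strconv.ParseInt(lit, 10, 64)
        return Constant{Int: i}, err
    }
}

func main() {
    fmt.Println(makeConstant("42"))   // {42 false} <nil>
    fmt.Println(makeConstant("true")) // {0 true} <nil>
}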

Once we’re done converting the AST into the IR tree, the real work can begin. Stay tuned!

Friday, 6 November 2015

Compiler 3 Part 2 - C Runtime and Code Generation

Before diving into the real meat and potatoes of the changes that led to the runtime being (temporarily) removed, I thought I should discuss the motivations.

The runtime irked me greatly because I felt that I did not leverage the power of C nor the C compilers. There were many, many improvements needing to be made and as I began working on them I came to a realization.

The runtime, at least in its current form, is not very useful.

Unlike in a language like Go, where the runtime serves a very real purpose (managing goroutines, the garbage collector, etc.), the Calc C runtime merely tried to mimic what it’s like to program in assembly.

Of course, there are reasons to re-add a runtime further down the road. Runtime checks, like bounds checking, are one such reason. Stack tracing is another wonderful motivation to implement a runtime. Built-in functions are also common and useful.

I still maintain this was a worthy exercise; long-term, it just wasn't viable. The more I looked at the runtime the more I knew I could do better.

The largest hurdle was removing the stack. How was I to handle the call stack? As it turns out, it’s not too hard to use C-style function calls but it did require some changes that the series 2 code base just wasn't equipped to handle elegantly.

I also needed to push and pop data on and off the stack. So how could I rid myself of it?

I have been researching how to use C as an intermediate representation for many months. Generating C function calls and declarations proved difficult with the Calc2 compiler and that was one of the motivations for using an assembly-like approach.

Ultimately, it became more and more clear that I was missing a step that would make my life a lot easier.

That secret, if you will, was intermediate representation.

After much research I ran into some potential solutions: three-address code and static single assignment. I have been aware of these representations and others like them (GIMPLE and RTL) for a while.

Using them seemed complicated and I didn't think they were worth the effort of learning about. I wasn't writing an optimizing compiler, so what advantages did they provide that I would need?

While not using any of these representations directly I do take advantage of what they provide. More on that next post.

Wednesday, 4 November 2015

Compiler 3 Part 1 - Introduction

I return to you once again with continued improvements and changes to the Calc programming language compiler. In the intervening months (almost a year!) since I completed the last series I have been busy with many other projects. However, I keep coming back to the compiler and in the last few months have been busy tinkering away.


I have taken a different route in this next iteration of the series and you could consider it a partial rewrite of the previous one. There are several knobbly sections in series two that I wasn't particularly happy with and in this third series I hope to address most, if not all, of them.


I also intend to make this third, and any potential future series, much shorter and narrower in scope. Please feel free to ask me questions as I will be glossing over some of the finer points.


So what has changed?


Well, the language itself has barely changed at all. The only new language feature is the boolean type. So, nothing too exciting there.


What did change was the entire back-end.


In series two, the code generator operated directly on the AST. It proved to be cumbersome and error prone. The compiler now uses an intermediate representation for the code generator that reduced the LOC from 544 to 255. Less than half the LOC while producing better, more easily optimized code.


That does not even include the removal of the runtime library!


Yes, the entire runtime has been removed, which meant deleting many hundreds of lines of code. The Calc language went on a diet!


The net gain is a much better compiler and I want to share those efforts with you all.

Stay tuned!

Series 1 - Novice Compiler Design
Series 2 - Apprentice Compiler Design

Saturday, 14 March 2015

Objects In Memory

Until recently, I can’t say that I ever gave much thought to how objects are represented in memory. I don’t write performance critical applications where a few milliseconds of extra speed might matter. I also don’t make programs for hardware with limited resources.

In short, I didn't feel like I needed to know.

That is, until I started learning about compilers and assembly. Suddenly, how data was represented in memory mattered and I began having to ask a lot of questions.

Questions like: What does a struct or array look like in memory? How are their elements accessed? Is extra space needed for the struct itself?

There are even more questions to ask about word boundaries, cache hits and paging but for the sake of brevity, we’ll keep it to just those posed.

Because Science!

OK, not really.

C is pretty close to the hardware and I probably understand the x86 assembly for it best so I chose to write an example in that language and peeked at the assembly output.

struct { int a, b; } s = {1, 2};

A simple sizeof call revealed that the above struct was 8 bytes. Each integer takes up 4 bytes of space so this is a reasonable result.

Next, I inspected the assembly generated by gcc. The values 1 and 2 were moved onto the stack, side by side. We would call this segment of memory contiguous.

Finally, I wrote a third test which passed the struct as an argument to a function. Again, only the two values, represented by ‘a’ and ‘b’, were copied onto the stack as arguments to the function.

Other than how they were arranged in memory, the fields of a struct are treated like any other variable.
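
Go makes it easy to repeat the experiment. Here is the same struct checked with the unsafe package, and the numbers line up with the C results:

package main

import (
    "fmt"
    "unsafe"
)

type s struct {
    a, b int32
}

func main() {
    v := s{1, 2}
    fmt.Println(unsafe.Sizeof(v))     // 8: the sum of the two 4-byte fields
    fmt.Println(unsafe.Offsetof(v.b)) // 4: b sits directly after a
}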

Conclusions, Struct Test

Based on my tests, I was able to draw the following conclusions:

  • The space needed for a C struct is the sum of its fields (see note below);
  • The memory for a C struct is kept in a contiguous block of memory for easy access.
Note: In truth, data types might need to be aligned and can have extra padding added, making a struct an unexpected size depending on its layout. If you're interested in learning more, this link was recommended to me.

In the end, a struct is only meaningful to the compiler and programmer. Each field is merely a memory address and nothing else about the struct is known to the CPU.

What about methods?

Function Pointers

Actually, before we can discuss object methods, we need to talk functions. To talk about functions, we need to cover pointers.

A pointer is just like any other variable except that the value it holds is a memory address.

The size of a pointer is the size of a memory address in the target system. If the platform is 64 bits, a pointer is 8 bytes. If it’s 32 bits, the pointer is 4 bytes and so on.

void func(void) {};
void (*f)(void) = func;


The sizeof f on my system, which is 64 bits, confirms that a pointer is 8 bytes.

Note: Taking the sizeof the function name, in this case func, gives the size 1 byte. To understand why, check out this response on Stack Overflow.

Methods

A method might seem a lot more complex than a function pointer but I’ll quickly demonstrate why it’s not.

In Go, you see something like this:

type myStruct struct {}
func (s myStruct) Method() {}


So why is it not so complex? Well, it’s because another way to express this same method call is:

func Method(s myStruct) {}

Once you recognise that a method call is merely syntactic sugar, dispatching the call becomes much simpler. In both cases, move the data in argument s to the stack or a register and call the function as normal using the first parameter as the struct data.

Arrays

Armed with knowledge we can deduce how an array works pretty easily. An array is merely a segment of contiguous memory.

int arr[5] = {0, 1, 2, 3, 4};

The assembly for this C array moves each value onto the stack one at a time, side by side. All the data for the array is kept packed together but other than that there’s nothing special going on.

All the CPU needs to know is where the first element of the array can be found. We can access the rest by using a byte offset from the first element.

Note: If you ever wondered why an array in C is a pointer (like I did) then there’s your answer.

To access a specific index, x86 assembly uses SIB (scale-index-base) addressing: base + (scale * index).

Base is the address of the start of the array, in this case the pointer arr. The scale is the size of each element. If using a standard word size element on a 32 bit machine, it would be 4 bytes. From the address of arr, add 4 bytes multiplied by i to get the ith element.
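
If you want to poke at this yourself, the same arithmetic can be spelled out in Go, where unsafe.Add (Go 1.17 and later) does the raw byte math by hand:

package main

import (
    "fmt"
    "unsafe"
)

func main() {
    arr := [5]int32{0, 10, 20, 30, 40}
    base := unsafe.Pointer(&arr[0]) // address of the first element
    scale := unsafe.Sizeof(arr[0])  // 4 bytes per element

    // base + (scale * index), just like the SIB addressing described above
    p := (*int32)(unsafe.Add(base, 3*int(scale)))
    fmt.Println(*p, arr[3]) // 30 30: the same element, reached two ways
}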

Note: In C, arr[i] is equivalent to *(arr + i) for this very reason. In both cases, you are accessing the ith memory address relative to arr’s address.

Note: A C array does not contain any information about the array itself, like its length. This is why C strings use a NULL character to terminate strings. It’s the only way that C functions can detect when they have reached the end of a character array. It is also another reason the C specification does not have bounds checking.

Implementing

I hope my explanations will help you implement arrays, structures (or classes), function pointers and methods in your own language. Many of these things seem very daunting but they're not actually that hard.

The hard part is wrapping your head around it!

In the end, the generated code is pretty simple. The complexity is, generally speaking, going to be in your compiler.

If you’re interested in what some Go types look like, check out this blog post by Russ Cox here.

Sunday, 8 March 2015

Concurrency vs Parallelism

Today, while I was reading up on instruction pipelines, I found my mind wandering slightly off on a tangent. I started to think about concurrency and parallelism.

It occurred to me that there is a really simple way to illustrate the difference if you’re struggling with the concept yourself. There is also a great talk by Rob Pike called “Concurrency is not Parallelism” that may help you, too, as it goes a lot deeper into the comparison.

Some Kind of Process

Let’s use an example of a computer assembly line. We can define a series of steps that must be followed to properly build our computer. Each stage is as follows:


  1. Place empty case on conveyor and put the cover beside it.
  2. Insert and fasten the motherboard to the case.
  3. Insert the CPU, Memory and a graphics card.
  4. Insert a disc drive and a hard drive.
  5. Finally, insert the power supply unit, hook up all the cables and close the case.


Note: A common word, and metaphor, for a process like this is a “pipeline” because the product of each stage flows into the next and only travels in one direction.

Concurrency

Each step is dependent on the one that comes before but each can be worked on independently of the others. Each stage can have a separate person working on it or a single person moving between each step.

Concurrency, to my mind, is taking a process, a task that requires a specific order, and dividing it into steps that can be worked on independently of the others.

Parallelism

Adding a second line, completely separate from the first, is an example of parallelism. It requires, at minimum, two separate workers, one for each line.

Concurrency in Parallel

A single worker is like a single core CPU. This worker must manage its time between each task, bouncing between each stage.

This same process can have many workers each one attached to a stage in the process. This is a concurrent solution that has been parallelized.
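
If you're a Go programmer, channels make the assembly line easy to sketch. Each stage below is a worker goroutine; run on one core the stages merely interleave (concurrency), on several cores they also run at the same time (parallelism), and the code doesn't change either way:

package main

import "fmt"

// stage is one worker on the line: take a case from the previous
// stage, do its step, and pass the case along.
func stage(name string, in <-chan string) <-chan string {
    out := make(chan string)
    go func() {
        defer close(out)
        for c := range in {
            out <- c + "+" + name
        }
    }()
    return out
}

func main() {
    cases := make(chan string)
    go func() {
        defer close(cases)
        for i := 1; i <= 3; i++ {
            cases <- fmt.Sprintf("case%d", i)
        }
    }()

    finished := stage("psu", stage("drives", stage("cpu", stage("motherboard", cases))))
    for c := range finished {
        fmt.Println(c)
    }
}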

Summation

Concurrency is a design solution and does not have direct hardware constraints. Concurrent solutions can run in serial but often can be easily run in parallel.

Monday, 19 January 2015

Fiddle and Bits and Bits and Bits

My last post didn't fully answer the question: "What are the possible values a variable can be?" I will complete the answer here.

I will use 8-bit bytes for the entirety of this article.

Least Significant Bit


The least significant bit is the rightmost bit. It is so named because it represents the smallest possible number in the binary system: zero or one. It is the least significant value of the eight.

Don't let that fool you. It can be quite useful. More on that at the end of the post.

Byte of Bits


Let's look at an empty, zeroed byte: 00000000 (eight zeros).

The least significant bit is the rightmost bit, so let's switch that bit on: 00000001.

We now have the number one. Move the bit up one slot and the value becomes two: 00000010.

How did that happen? Maths!

Math


Each bit in the byte represents a power of two (see my previous post).

Each bit's value is 2 raised to a power. Starting at zero, incrementing by one for each step, move right to left and raise 2 to that power. The least significant bit is 2^0. The second least significant bit is 2^1, then 2^2 and so on. Sum the bits turned on for the value the byte represents.

Check it out:

Binary | Equation                      | Decimal
00     | (0 x 2^1) + (0 x 2^0) = 0 + 0 | 0
01     | (0 x 2^1) + (1 x 2^0) = 0 + 1 | 1
10     | (1 x 2^1) + (0 x 2^0) = 2 + 0 | 2
11     | (1 x 2^1) + (1 x 2^0) = 2 + 1 | 3

The value of each bit, from left to right, looks like this: 128, 64, 32, 16, 8, 4, 2, 1.

If you sum all these values, you get the value: 255. That's the maximum value of an unsigned byte!

Signedness


Ok, time to make things weird. Well, not finding out your life-partner is your cousin weird but weird enough.

On the opposite side of the byte from the least significant bit is the most significant bit.

In many languages a type may be signed or unsigned. An unsigned byte can only hold positive, whole numbers. A signed byte can hold positive or negative whole numbers.

Most Significant Bit


A computer doesn't have a notion of a negation sign. It can represent a negative number, sure, but all it sees are bits. Using two’s complement, how can a single bit be a sign when it must be a number?

The most significant bit does determine whether the value is negative or not. However, the eighth bit always has a value of 128.

Confused, yet? Good!

More Maths


When a computer interprets a signed byte, the most significant bit becomes a negative value, not a negative sign. Rather than being the value 128, it becomes -128.

Consider this: can you have a value of negative zero? Can zero ever be negative or is zero just zero?

If zero can’t be negative and the most significant bit merely represented the sign of the value, then what does 10000000 mean? This would be -0 and that is not a number.

To make the value -1, we need to add a value to -128 to get there. If we remove the eighth bit from the byte, the remaining seven bits consist of the values: 64, 32, 16, 8, 4, 2 and 1. Sum them all together and you get the value 127. -128 + 127 = -1.

Negative one is 11111111. There's a trick with this at the bottom of this post.

Let’s do another example of a negative number. Let's shoot for -54. What number do we need to add to -128 to create this number in binary?

We just need to solve this equation: x = y + z; where x = -54 (the number we want), y = -128 (most significant bit) and z is the value we need.

-54 = -128 + z
-54 + 128 = -128 + z + 128
74 = z

Set the balance of the bits to 74 and the actual value will be -54: 11001010
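
You can check the math with a quick Go program that reinterprets the bit pattern as a signed 8-bit integer:

package main

import "fmt"

func main() {
    b := uint8(0b11001010) // the bit pattern from above
    fmt.Println(int8(b))   // -54: -128 + 64 + 8 + 2

    b = 0b11111111       // all bits on
    fmt.Println(int8(b)) // -1: -128 + 127
}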

Signed Values


We can now fully answer the question, “What values can a given C type have?”

A single byte has 256 different values. A C type that consists of a single byte is the char type.

An unsigned char can consist of the values 0 to 255.
A signed char can consist of the values -128 to 127.

Not a Dummy


Don’t feel bad if you have to read things over a few times for it to sink in. It took me a bit to really wrap my head around this but once I did, things really fell into place.

You can cheat, too. If an 8 bit byte has 256 values then divide it by two. That value negated is the lower bound. Reduce that same value by one for the upper bound.

256 / 2 = 128
Lower bound = -128
Upper Bound = 128 - 1 = 127

Fun Fact


Binary is how numerical permissions work for the ‘chmod’ command in Linux.

Bits | Value | Permission
000  | 0     | None - Denied!
001  | 1     | Execute
010  | 2     | Write
100  | 4     | Read

Combining the bits together we get the following table.

Bits | Value | Permission
011  | 3     | Write (2) + Execute (1) = 3
101  | 5     | Read (4) + Execute (1) = 5
110  | 6     | Read (4) + Write (2) = 6
111  | 7     | Read (4) + Write (2) + Execute (1) = 7

This is the beauty of binary: you can combine any number of unique options together without overlap.

Cool Tricks


Now that you’re a pro at understanding binary, I thought I might show you a few nifty tricks before I leave you.

1. Want to test for whether a number is odd or even?

if ((n & 1) == 0) { /* even */ } else { /* odd */ }

2. Want to test whether a signed type is negative or positive?

const unsigned char MSB_BYTE = 1 << 7; /* bit eight of a byte, value 128 */
if ((n & MSB_BYTE) == MSB_BYTE) { /* negative */ } else { /* positive */ }


3. Want to set an unsigned variable to its maximum value but don't know what it might be on a particular machine? Many languages will let you do this little trick.

unsigned char c = -1;

It sets all the bits on, which is always the maximum value. Go back to the "More Maths" section to see why this works.

There are no guarantees these tricks will work everywhere so read the specification for your language and check the compiler implementation to know before using them!

If you have never done bit shifting or used bitwise operators then these tricks probably make no sense. Once you learn about them, come back here and think about these tricks and see if you can figure out why they work.

Friday, 16 January 2015

Type Sizes and Max Values

Today, I want to take a step back and talk about something a bit more elementary.

If you've just started learning to program, understanding the value ranges of types can be hard. I hope that I can maybe impart some knowledge to you that will help you get a leg up.

In computing, we use a system called binary to count. Binary is a sequence of ones and zeros that are used to represent numbers.

We'll begin with...

Bits


I like to think of a bit as a gate in a yard; a gate that can only be in one of two states: open or closed.

An open bit is represented by the number 1. When open, it allows an electrical current to flow through it. A closed bit, which does not allow electricity to flow through it, is represented by the number zero.

This is the essence of binary. A grouping of numbers which have only two states.

Note: When a gate is open, it is in the “high state” (as it would be called in electrical engineering), allowing maximum electron flow through the gate. When a gate is closed, it is in the "low state".

Two’s Complement


As I just said, if something consists of a single bit, it has two states. 1 (one) or 0 (zero). So, if you add a second bit into the mix you have four possible states. Two states for the first bit and two states for the second.

All possible combinations of these two bits look like this:

00 - bit one and bit two are both closed
01 - bit one is closed and bit two is open
10 - bit one is open and bit two is closed
11 - bit one and bit two are both open

If you add a third, you get 8 possible combinations. They are: 000, 001, 010, 100, 011, 110, 101, and 111.

Each bit you add grows the maximum number of possible combinations exponentially. For each bit you have, you raise two to that power. One bit is two to the power of one. Two bits is two to the power of two. Three bits is two to the power of three and so on.

This is two’s complement.

Bytes


A byte is a specific number of bits grouped together.

Different computer architectures can have a different number of bits per byte. The Intel 4004 CPU, for example, had a 4 bit word size. The ASCII character set is based on a 7 bit byte. An 8 bit byte is most common and the remainder of this article will be based on this byte size.

We must raise two to the power of eight to find out how many values a single byte can represent. Two to the power of eight is two hundred and fifty-six.

Want proof? Enter the number two into any calculator. Multiply it by two seven more times. 2x2x2x2x2x2x2x2 = 256.

Looking at the progression, one step at a time: 2, 4, 8, 16, 32, 64, 128, 256.

A byte, then, can hold 256 different values but a single byte can’t actually hold the value 256. 

So, why not?

Zero is a Value Too!


Zero is one of the 256 values that a byte can be. Consider a single bit. What are its values? Zero or one.

When all the bits in a byte are closed it can only be one thing. Zero.

If zero is one of the 256 values a byte can be it means that the maximum value that can be held in a single byte is 255. 

Multiple Bytes


Bytes can be paired up, too, using two’s complement. Two, four and eight are common multi-byte pairings.

Each time, we raise the power up further. A single byte is 2^8. Two bytes is 2^16. Four bytes is 2^32. Eight bytes is 2^64.

Value Ranges of C Types


Let's look at a C type of each typical byte size.

  • char = 1 byte
  • short = 2 bytes
  • long (int) = 4 bytes
  • long long = 8 bytes

So, based on the above pairings, can you deduce what the value ranges are for each of these types?

Well, we already know how high a byte goes since it has 256 values including the zero value, so it has a range of 0 to 255.

  • A short is 2 bytes (16 bits). 2^16 is 65536 for a value range of 0 to 65535.
  • A long or int is typically 4 bytes (32 bits). 2^32 is 4,294,967,296 for a value range of 0 to 4,294,967,295.
  • A long long is typically 8 bytes (64 bits). 2^64 is 18,446,744,073,709,551,616 for a value range of 0 to 18,446,744,073,709,551,615.

All you need to know is how many bytes a type consists of and you can work out the value yourself!
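
If you'd like to check the math, here's a quick Go snippet that works out each unsigned maximum from the byte count alone:

package main

import "fmt"

// maxUnsigned returns the largest value an unsigned type of the given
// byte count can hold: 2^(bytes*8) - 1.
func maxUnsigned(bytes uint) uint64 {
    if bytes >= 8 {
        return ^uint64(0) // all 64 bits on
    }
    return 1<<(bytes*8) - 1
}

func main() {
    for _, b := range []uint{1, 2, 4, 8} {
        fmt.Printf("%d byte(s): 0 to %d\n", b, maxUnsigned(b))
    }
}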

Stay tuned for more...