## Archive for the ‘**H++ Compiler**’ Tag

It’s been a while since I published my last post. This time, I’ll start with a short post, the first in a series of solved problems in H++.

This time, I’ll show two problems from the Stanford Mathematics Problem Book, a little book full of very interesting math problems. I highly recommend getting it if you like finding solutions to problems like these and others common in mathematics. This is my own copy of the book.

Stanford Mathematics Problem Book

**Solving the Problems using H++**

**First Selected Problem: Problem 48.1**

Because my time is short right now, I selected only two problems for this post. The first one, discussed here, is the problem on page 7, where you have to show that the two sides of the equation are equal; the left side is a series, and the right side is a simple cubic expression:

Problem 48.1 on Page 7

**The H++ Solution**

First, I implemented an iterative power function, just to test the H++ compiler:

Power code

Then I implemented the actual solution, a function that prints the terms of the series:

Problem 48.1 Solution in H++

Then I called this function after instantiating the class where I coded it; this is the entire code:

Invoking the Solutions Functions of the Problems in this Part

**Second Selected Problem: Problem 54.1**

And this is the other problem I selected, with a solution implemented in H++:

Problem 54.1 on Page 13 in the Stanford Problem Book

And this is my solution, using H++ (you can try it in other programming languages):

Solution for Problem 54.1 on Page 13

**Other Algorithms implemented in H++**

Another algorithm I decided to implement for this post is the divide-and-conquer version of the Fibonacci algorithm:

Fibonacci Algorithm

**The Output**

This is the output of the compilation process:

The compilation process’s output

**The Results for n = 10**

And this is the result/solution for n = 10:

The output

In the next post, I’ll be working on more complex algorithms to show the power of H++. See you next time!

As promised, this is the post that reveals the details of the IEEE 754 support in H++.

If, after reading this post, you think you need a more in-depth discussion of IEEE 754 and its relation to programming languages, please read Michael L. Overton’s excellent book, Numerical Computing with IEEE Floating Point Arithmetic. A revision of the current IEEE standard can be found in IEEE 754: Revision of 2008.

Numerical Computing with IEEE Floating Point Arithmetic

As Overton explains in his book, support for IEEE 754, and floating point in general, has historically been inconsistent in both the hardware (the CPUs) and the software (the programming languages). Languages like C/C++ and Fortran are among those with full support for the IEEE 754 specification, but it was not that way from the beginning.

The floating point support in H++ is very simple: I have both float and double types, with the same size and precision: 64 bits (1 bit for the sign, 11 bits for the exponent, 52 bits for the significand, also called the mantissa). This was done for simplicity when implementing the H++ code generator for 32-bit x86 assembly.

So you can declare local or static variables with the keywords **float** and **double**, and both will have the double precision known from the double data type. In the future, that will change to follow the standard: float will be 32 bits and double 64 bits in size.

Some of the floating point tests I wrote as sample H++ programs are provided as C programs in Michael L. Overton’s book; if you want to run the same tests as C/C++ programs, please visit the link: Programs in this book.

Another missing feature in H++ is in the print format: H++ currently supports only fixed format, which is very restrictive, but it works. Exponential format is the feature missing from my implementation. Here are examples of what I mean:

Fixed format: 0.000542368921 (currently the only print format supported in H++)

Exponential format: 5.42368921E-4 (an easier-to-read print format, also known as scientific notation)

I’m thinking of dedicating a couple of days to implementing that format, but I really don’t know when I’ll get to it.

**Testing the H++ Compiler**

One of the tests you can find in the book is the reciprocal powers of 2, as in this H++ program:

Calculus of Reciprocal Power Of 2

This little program outputs the reciprocal powers of 2. One detail I have to explain, which shows the difference between this implementation and the original found in Overton, is that the loop condition compares against the tiny number 0.00000000000000001 rather than against 0.0, as the original does. The reason is simple: because I only implemented fixed format for output, all the numbers below 1.0E-17 are printed as 0.0, long before x really underflows.

The output:

Results of Reciprocal Powers of 2

The final results are:

The problem with Fixed Format in H++

From 2^-55 down, the number printed is zero; if the while condition compared against 0.0, the results would all print as zero from there until x really underflows to 0.0. The reason is the precision I’m using: 64 bits.

A test that passed OK was the one involving 1.0 added to the reciprocal powers of 2:

Calc One Plus Reciprocal Power Of 2

The results were as expected:

Results passed Ok for 1.0 + 2^x

This phenomenon is very interesting: when the reciprocal power is isolated, H++ does not print it very well; but when you add it to 1.0, the results are as expected.

One interesting test is the one related to approximations, as explained in Overton; here is the H++ test program:

Find One From Quotient

As you can see from the formula 1/(1/x), the result should be x in all cases:

One Div Reciprocal X Results

This test was successful in H++. Here is why this test is so important: the binary representations of 1.0 and 0.9999999 are totally different even though the values are almost equal; let’s see these numbers as 32-bit float values in their binary format:

**Short Real sign exponent mantissa hex-value**

1.000 = 0 01111111 00000000000000000000000 0x3F800000 <—- 1.0

0.99999 = 0 01111110 11111111111111111111110 0x3F7FFFFE <—- 0.9999999

As you can see, the number and its approximation have different binary representations; but if you keep adding digits to the approximation, rounding takes place, and the number finally changes from 0.9999999… to 1.0. You just have to add one more digit 9 to turn 0.99999999 into 1.0:

**Short Real sign exponent mantissa hex-value**

1.000 = 0 01111111 00000000000000000000000 0x3F800000 <—- 1.0

1.000 = 0 01111111 00000000000000000000000 0x3F800000 <—- was once 0.99999999

If you want an in-depth explanation of this phenomenon, please read Overton’s book: Numerical Computing with IEEE Floating Point Arithmetic.

**Fractions with Expressions of Zero Result, as the Divisor**

Another interesting subject is fractions with zero as the divisor, an operation which is invalid in mathematics.

It’s well known that an expression with fractions or divisions whose denominator evaluates to zero cannot be resolved immediately; so, from calculus, you know that some limits must be approximated when they cannot be solved by simple substitution of x into the polynomials.

There are a lot of tools in calculus to attack that problem, and one is approximation: if you cannot use 3 as the value of x, you can approach it from the left with values like 2.4, 2.5, 2.7, 2.9, 2.95, 2.98, 2.995, 2.9995, or from the right with values like 3.0000099, 3.00009, 3.0001, 3.001, 3.01, 3.1. Tools like L’Hôpital’s rule are also well known to apply in these cases, avoiding the approximation or numerical analysis.

But one of the features of IEEE 754 is that you can take two different approaches: let the result become infinite, or throw an exception, following the rules for division by zero.

In the following example, you can see the first approach in action; letting the value propagate helps you spot the problem in your formula or in your program:

y = 1/x; // this will result in y = +oo | -oo (plus or minus infinity)

Based on the first approach, that expression results in +oo (plus infinity) or -oo (minus infinity); if you continue operating with this value, the result could stay infinite. If you then apply the same operation to the previous result:

z = 1/y; // this will result in z == 0.0

The new result would be zero.

The second approach is to throw the IEEE 754 floating point exception called Division by Zero; you’ll need to catch the exception to understand the problem and keep the program from terminating unexpectedly:

```
try {
    y = 1/x; // this expression causes the exception when x == 0.0
    z = 1/y;
} catch() {
    printf("Division by Zero has occurred!");
}
```

Here is an H++ program that shows the results of the total resistance for different values of R1 and R2 (the two resistances in the circuit):

Total Resistance in H++

To get the programs in C, please visit Michael L. Overton’s site.

The output of this little program is:

Results of Total Resistance in H++

As you can see, when R2 == 0.0, the total resistance in the circuit is 0.0. That means that:

Total Resistance Formula

More advanced tests, like approximating a derivative by a difference quotient, can be found in Overton’s book. A phenomenon called cancellation is also well explained there, so I recommend reading it to get a better understanding of the issues involved with compilers and the expected results.

Here is the output:

Approximating a Derivative from A Diff Quotient

This is the H++ program that generates that output:

ApproxADerivByDiffQuot Program in H++

**Approximating Euler’s Constant e**

Let’s do another test: finding Euler’s constant e in an H++ program:

Approximating Euler e Constant

As you can observe, the result depends on the size of n; as n → ∞, e gets more accurate. Here is the result:

Approximating Euler e Results

You can see that when I applied the natural logarithm to the value of e found with n = 10E+8, the result was close to 1.0; the approximation was very good for that value of n.

Now let’s do the same thing, but this time using the Taylor series formula:

Approximating e using Taylor Series

Now, making a few changes to return the value of e from each function and compare the digits, to determine how many are correct in the two approximations, we get these results:

Approximating e Methods

Now you can see that the two approximations agree in the first 6 digits after the decimal point. In this example, the more accurate one was the Taylor series formula.

**H++ Floating Point Math Library**

As I explained in older posts, the math library supports almost all the fundamental functions, primitives and intrinsic functions; most of them were implemented at the x86 assembly level, to leverage the existing implementation in the floating point unit (FPU) of x86 microprocessors.

**Fundamental Math Functions**

Here are some of the implemented functions:

Hpp Math Lib - Part I

Most of these functions are already implemented in the FPU.

Hpp Math Lib - Part II

You can also note the constants that come precomputed in the FPU, and the most common trigonometric functions.

I also implemented the y^x function, and e^x:

Hpp Math Lib - Part III

**Inverse Functions**

Hpp Math Lib - Part IV

Here you can see how I implemented the inverse ArcSin function in x86 assembly language for the H++ library:

Implementation of Inverse ArcSin Function

**Hyperbolic Functions**

Finally, for the other, more advanced functions, I decided to implement them directly in H++ for lack of time:

Hyperbolic Functions

This code is compiled into the H++ library and is available for any math operation that requires it.

**Inverse Hyperbolic Functions**

Hyperbolic Inverse Functions

Some aliases were defined so you can use these hyperbolic and inverse hyperbolic functions without referring to the full name of the static function:

Hyperbolic and Inverse Hyperbolic Function Shortcuts

**Having implemented these last functions in H++ saved me a lot of time, which I dedicated to finishing some important issues, including bug fixes.**

**Conclusion**

Full support for IEEE 754 is still distant in H++, but as you could see, a lot of work is already done in the H++ compiler, and only a few things are missing. I highly recommend reading Overton’s book, Numerical Computing with IEEE Floating Point Arithmetic, if you are serious about floating point and the IEEE 754 standard.

I hope this post helps you understand the basics of floating point, and its support in H++.

Bye, bye.

In this new post, I’ll continue showing some of the innards of the assembly code generated by H++, continuing from Part I.

**An H++ For-Statement in Assembly Code**

This is what a for-statement looks like:

For-Statement in Assembly Code

You can also note the access to a floating point value in the array of doubles, using the subscript index variable i.

**An H++ While Loop**

Now, this is what a while statement looks like:

While Statement

In the future, the code will be more compact, but first I have to dedicate time to finishing the Advanced Compiler Design book, and to reading another book on optimization.

**Floating point Comparison in an If-statement**

The floating point support in H++ is very simple: only double precision is supported for now, which means variables are 8 bytes in size, which also means more precision and accuracy. The comparison is made in the FPU, but through a call to an implicit function called **Compare** in the **FloatingPoint** class:

Floating Point comparison in H++

Some issues I found while developing the H++ code generator have to do with comparisons in general in the condition expressions of the if-statement, for-statement and while-statement. The problem arose because my idea was to have the **bool** data type returned from any boolean expression, with the result always in EAX as a byte in AL, so it could then be assigned to a bool variable from the same computation of the expression; the value flows from the result to the assignment as 1 for true and 0 for false. Doing it my way, I added a lot of code; in the future I intend to change that to a cleverer solution.

This is a clearer example, comparing two double variables:

Comparing Two Array Items

To access each element of the array, the code above in the frames is necessary; then the call to **FloatingPoint_Compare** occurs, and finally the **if-statement** comparison is done.

**Array subscript Assignment**

This is what it takes to assign values to array subscripts in H++; the array of doubles is accessed and then the value is assigned:

Assignment to a Double Array

**Another Example of Dynamic Cast and Property Calls**

In the following picture, you can see another example of the use of the dynamic cast operator, and of H++ properties:

Dynamic Cast and Property Call

**Global Variables and String Constants**

The following picture shows how global variables and string constants are generated in the data segment of an H++ program:

Global Strings

Also you can note constants that won’t need space in the final executable.

**VTables in H++**

The vtables in H++ work similarly to those generated by the Visual C++ compiler; they are defined in the data segment during the code generation phase of the H++ compiler:

Vtable Definitions in H++

You can also look at the vtable of the **Array** class, which is an abstract class; the members with __purecall are abstract members that have no implementation until a concrete class like **ArrayofAutomobile** provides one.

Another example is what happens with the **QuickSort** abstract class, which can have different implementations of the selectPivot member, based on the approach used to select the pivot element; the **MedianOfThreeQuickSort** class is the one that implements selectPivot, which makes it a concrete class; see the picture below:

Vtables from Concrete and Abstract Class

OK, I think I have introduced you to the innards of the H++ generated assembly code. I hope this series of posts can be helpful if you are attempting to create a compiler that generates native code!

Nowadays, compiler writers are focusing on virtual machines, but at the end of the journey, the concepts are similar, just virtual!

In the next post, I’ll be showing you some floating point testing programs that can be used to test any compiler that supports floating point operations.

See you in the next post!

**What Does the H++ 32-bit Generated Assembly Code Look Like?**

The generated code is very simple; it looks like a plain assembly program:

Assembly Generated Code in H++

You can also note a dependency on the H++ library include file hcclib32.inc, which defines the prototypes of all the implemented library functions described in earlier posts.

You can also see in the code fragment above that there is code for filling local variables with the CCh byte pattern, used to detect uninitialized local variables when a bug appears in a program. This approach is heavily used by some C++ compilers when programs are compiled in debug mode.

To activate that byte pattern for local variables in H++, use the /GZ compiler switch.

The entry point the operating system really sees is called __System_Hpp_Runtime_Init; this is the function invoked when the process is created for the compiled TestSorting.exe file. The next picture shows what it looks like:

H++ Runtime Init Procedure

As you can see from the picture above, the H++ runtime initialization procedure is a very simple entry point, which calls your H++ program’s public static void main() function.

As with every assembly language file compiled into an exe, the entry point is specified, in this case, at the end of the compilation unit or asm file:

H++ Startup Function

**H++ Translated Language Statements**

As you’ve noted in the assembly code, some parts of the code are annotated, as in the following **H++ if-else** statement:

If Statement and Assignment

To have annotation in the results, you can use the /A switch on the H++ Compiler.

This code is actually very verbose, but over time it will shrink as I improve the H++ code generator; that will require spending more time on this project, which is a little complicated for me these days.

**H++ object constructors**

The constructors are generated mostly as in the next picture:

An H++ Object Constructor

As you’ve noted, this is the constructor for the BinaryInsertionSort H++ class created for previous posts; this constructor calls the base class constructor, and finally initializes the vtbl pointer with the expected vtable address.

Another constructor is shown in the next picture, where you can see dynamic memory allocation using the **H++ new** operator:

A Constructor Initialization and Dynamic Memory Allocation

In the previous picture, you can note the initialization of a pointer to an array of **Proxy::Automobile** objects from the previous post.

**H++ Destructors**

This is what the destructors look like; you can also note the deallocation of heap memory using the **H++ destroy** operator:

Destroying H++ Objects Virtual Destructors

You can see how I implemented the **H++ destroy[] operator**, calling the destructors of all the objects behind the array pointer, and finally the real destruction of the heap memory, returning it to the operating system.

**Calling Non-Virtual Functions**

Non-virtual functions are easier to identify in the code, because the objects don’t use the vtable to obtain a pointer to the function to call; these calls are direct calls to the functions generated in the assembly code:

Calling a Non Virtual Function

As you can see, the **siftDown** function of the **HeapSort** algorithm object is called directly in the code above. For more information about the H++ source code of these algorithms, please visit the H++ Sample Sorting II post in this blog.

**Calling Virtual Functions**

In this case, the calls are made through a pointer from the vtable; this is what a virtual call looks like:

Calling a Virtual Function

In this picture, you can see the call to the **SwapValues** virtual member function, which in this case is at offset +4 in the vtable; the vtable was loaded into the EDX register before finally making the call.

As you can see, this is pretty similar to how the Visual C++ compiler generates virtual calls.

**H++ Properties and Property Call**

One very useful feature I wanted to have in my language, and implement in the compiler, is support for properties. In this version of the H++ compiler, I implemented properties as non-virtual functions; maybe in the future I’ll decide to support virtual properties, but right now I don’t think they would be as useful as virtual member functions.

The H++ language supports both getter and setter accessors for properties, so you can have code like this:

```
{
    Circle circle;
    circle.Radius = 30.0;
    circle.ComputeArea(); // the result will be accessible through the read-only property Area
    printf("The Radius is = ", circle.Radius, ", and the Area is = ", circle.Area);
}
```

**H++ Property: Getter Accessor**

This is the result of accessing the getter accessor function of an object:

Calling getter accessor Properties

In this case, you can see the access to the Model, Year and Cost properties of the Automobile object.

**H++ Property: Setter Accessor**

Finally, you can see the setter in action, which allows changing the internal state of the object through calls like these:

Calling setter accessor Properties

In the next post, I’ll continue explaining some more advanced features found in the generated code for the TestSorting.hpp H++ file.

Bye, bye!