Using concise fractions to save time and memory in microprocessor applications

August 31, 2010
by bretm

This isn’t a question, it’s more along the lines of a tutorial.

Floating-point support takes up a lot of valuable space on the microcontroller and floating-point routines are slow. It’s common to optimize size and speed of microcontroller apps by using fixed-point approximations to real values. For example, you might represent numbers by integers that are 10000 times the actual value. In this scheme, pi would be represented by the integer value 31416. Multiplication could be done just by multiplying the integers and then dividing by 10000, e.g. sqrt(2) times pi would be 14142 x 31416 / 10000 = 444285072 / 10000 = 44429 (rounded), which represents 4.4429. This is a great performance and memory trade-off if you don’t need the 7 digits of precision that the float type gives you.
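
In Python, the whole scheme is just integer arithmetic. Here's a quick sketch (the names SCALE, SQRT2_FIXED, and PI_FIXED are mine):

SCALE = 10000                 # fixed-point values are SCALE times the real value
SQRT2_FIXED = 14142           # represents 1.4142
PI_FIXED = 31416              # represents 3.1416

# the raw product carries the scale factor twice, so divide it out once,
# adding SCALE/2 first so the truncating division rounds to nearest
product = (SQRT2_FIXED * PI_FIXED + SCALE // 2) // SCALE
print(product)                # 44429, which represents 4.4429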

But fixed-point representations, which use a fixed denominator such as 10000 for human readability or 65536 for performance reasons, aren’t necessarily the most accurate option. Consider the fraction 239/169 as an approximation for sqrt(2). This representation has a value of 1.41420118… which has almost 5 decimal places of accuracy when compared to the actual value 1.41421356…., and it still only requires two bytes to represent it. Multiplying an integer (which itself might be a fixed-point representation of another number) by sqrt(2) can be done by multiplying by 239 and then dividing by 169. This is more accurate and takes fewer clock cycles than multiplying by 14142 and dividing by 10000.

But how do you choose the best fraction to represent a given real number? Many a programmer’s first instinct would be to use as much precision as possible in the numerator or denominator by choosing the largest value in the available range. For example if you want to limit your numbers to 256 you might think that choosing 256/x would be the most accurate approximation for sqrt(2). In this case you would end up with the approximation 256/181, which is accurate to not-quite 4 decimal places. (It’s not as accurate, but it’s faster because multiplying by 256 can be represented by an implicit byte-shift.)
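
A few throwaway lines of Python make the comparison concrete (a sketch; the loop is mine):

import math

# print each candidate approximation of sqrt(2) and its absolute error
for num, den in [(14142, 10000), (256, 181), (239, 169)]:
    ratio = float(num) / den
    print("%5d/%-5d = %.8f  error = %.2e" % (num, den, ratio, abs(ratio - math.sqrt(2))))

Running it shows 239/169 beating both of the others, despite its single-byte numerator and denominator.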

Another approach would be to try every possible numerator in a given range and choose the result that gives you the most accurate answer. In this case if you tried 256, then 255, then 254, etc., you’d eventually find 239/169, but then you’d have to try a lot of other ratios before you could be sure that there wasn’t a more accurate one.
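
For the curious, that exhaustive search might look something like this sketch, which walks the denominators and rounds to the closest numerator for each (the variable names are mine):

import math

target = math.sqrt(2)
limit = 256
best_num, best_den = 1, 1
for den in range(1, limit + 1):
    num = int(round(target * den))   # closest numerator for this denominator
    if num > limit:
        break                        # numerators only grow from here
    if abs(float(num) / den - target) < abs(float(best_num) / best_den - target):
        best_num, best_den = num, den
print("%d/%d" % (best_num, best_den))   # 239/169

It does find 239/169, but only after testing every denominator up to the limit. The method below gets there directly.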

Here’s an algorithm you can use to find the "best" approximations to a given real value. In this case "best" means the most accurate for a denominator of a given magnitude. You can do this by hand, but a spreadsheet makes it a lot easier. Start by setting up four columns, starting with the following setup:

      A        B    C    D

                    0    1
                    1    0
2.302585093

Your number goes in cell A3. In this case I chose the natural logarithm of 10. The numbers 0, 1, 1, 0 go into the top of columns C and D as shown.

Next, take the whole part of your number and put it in column B.

      A        B    C    D

                    0    1
                    1    0
2.302585093    2

Now, multiply the C and D values from the previous row by the new value in column B, and then add the C and D values from two rows up. In this case we have 2 times (1,0) equals (2,0), and then add (0,1) to get (2,1).

      A        B    C    D

                    0    1
                    1    0
2.302585093    2    2    1

Next, calculate 1.0 divided by the fractional portion of the previous real number. In this case we want 1.0 / 0.302585093, which is 3.304855471; put it on the next row:

      A        B    C    D

                    0    1
                    1    0
2.302585093    2    2    1
3.304855471

Now repeat the previous steps for columns B, C, and D. Put the integer part of the new number in B. Multiply the previous C and D by the new B and add the C and D from two rows ago:

2.302585093    2    2    1
3.304855471    3    7    3     <-- 3 x (2,1) + (1,0) = (7,3)

Keep doing this a few more times, and eventually you'll get better and better fractional approximations in columns C and D. After a few more rows you see the approximation 175/76, which has more than 4 decimal places of accuracy. Add a few more rows and you'll get increasingly accurate answers, at the cost of larger numerators and denominators.

3.280242920    3   23   10
3.568332792    3   76   33
1.759532467    1   99   43
1.316599413    1  175   76
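
If you'd rather check the table with code than with a spreadsheet, here is the same bookkeeping as a short Python loop (a sketch; seven rows is an arbitrary cutoff):

a = 2.302585093          # column A: the value being approximated (ln 10)
c0, c1 = 0, 1            # the two starting rows of column C
d0, d1 = 1, 0            # the two starting rows of column D
for _ in range(7):
    b = int(a)                    # column B: the whole part of column A
    c0, c1 = c1, b * c1 + c0      # new C = B times the last C, plus the C above it
    d0, d1 = d1, b * d1 + d0      # same recurrence for column D
    print("%.9f %2d %4d %4d" % (a, b, c1, d1))
    a = 1.0 / (a - b)             # next row of column A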

I chose logE(10) instead of sqrt(2) for this example because square roots have an interesting property that would have made for a confusing example: when you apply this algorithm to them, the numbers in column B eventually start to repeat. For exact rational numbers the process terminates (the fractional part hits zero). For many irrational numbers the column B values continue unpredictably forever, and for some irrational numbers they never repeat but are completely predictable. Here's how the table would look for sqrt(2):

                  0    1
                  1    0
1.414213562  1    1    1
2.414213562  2    3    2
2.414213562  2    7    5
2.414213562  2   17   12
2.414213562  2   41   29
2.414213562  2   99   70
2.414213562  2  239  169

Columns A and B stay at 1+sqrt(2) and 2 forever. And there's the 239/169 I used earlier.

Here’s the table for “e”, the base of the natural logarithm:

2.718281828  2    2    1
1.392211191  1    3    1
2.549646778  2    8    3
1.819350244  1   11    4
1.220479286  1   19    7
4.535573476  4   87   32
1.867157439  1  106   39
1.153193129  1  193   71

The fraction 193/71 is accurate to almost 5 decimal places. The values in column B continue on: 6, 1, 1, 8, 1, 1, 10, 1, 1, 12, 1, 1, ... See the pattern?

Cutting the list off just before a large number in column B provides a particularly excellent approximation for a given precision. For example, here’s the table for pi:

3.141592654    3       3      1
7.062513306    7      22      7
15.99659441   15     333    106
1.003417231    1     355    113
292.6345909  292  103993  33102

If we stop at 355/113 we get a number that is accurate to almost 7 decimal places! You won't find a better approximation until you get into 6-digit numerators. Including the next row gives us a fraction that is accurate to one part in 5 billion, but at the cost of more storage and computation time if we were to use those constants in a program.

The series of numbers in column B is called the continued-fraction representation of the number. For pi it is written as [3; 7, 15, 1, 292, ...]. For e it is [2; 1, 2, 1, 1, 4, 1, 1, 6, ...]. It is shorthand for a "continued" fraction of the form a + 1/(b + 1/(c + 1/(d + ...))). For example, [3; 7, 16] is equal to 3 + 1 / (7 + 1 / 16), which is also 355/113. (I used a trick there: if the last number in a list is 1, e.g. [3; 7, 15, 1], you can discard the 1 and add it to the previous number.)
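
That shorthand is easy to evaluate mechanically. Here is a small sketch using Python's fractions module (cf_value is my own name, not a library function):

from fractions import Fraction

def cf_value(terms):
    # fold a + 1/(b + 1/(c + ...)) starting from the innermost term
    value = Fraction(terms[-1])
    for t in reversed(terms[:-1]):
        value = t + 1 / value
    return value

print(cf_value([3, 7, 16]))       # 355/113
print(cf_value([3, 7, 15, 1]))    # also 355/113, confirming the trick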

The ratios 3/1, 22/7, 333/106, and 355/113 are called the convergents of the continued fraction, and each one is guaranteed to be a better approximation than any fraction with a smaller denominator.

For more information, see Continued Fraction on Wikipedia.

August 31, 2010
by wayward

Wow! Great write-up, bretm. I was actually curious how these fractions were calculated, but had always been too lazy to search for the answer myself. Thanks!

Here's also a somewhat related, iterative algorithm to approximate the square root of a real number N.

  1. Guesstimate a starting value a: anything from 1 to N will do.
  2. Divide the number by your guess: d = N/a.
  3. Check how close you were: if d and a differ by less than some ε, you're done.
  4. Refine your guess: a = (a + d) / 2 and repeat from step 2.

It may not be the fastest way, but it's simple and handy to know. Also see Wikipedia for more numerical methods of calculating square roots and related functions.
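
For concreteness, here is one way that might look in Python (a sketch; approx_sqrt and the default ε are mine):

def approx_sqrt(n, epsilon=1e-9):
    a = float(n)              # step 1: a crude starting guess (N itself works)
    while True:
        d = n / a             # step 2: divide the number by the guess
        if abs(d - a) < epsilon:
            return a          # step 3: close enough, done
        a = (a + d) / 2.0     # step 4: average the guess and quotient, repeat

print(approx_sqrt(2))         # 1.414213562...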

August 31, 2010
by Ralphxyz

Excellent, it will take a bit to digest, but thank you.

Ralph

August 31, 2010
by bretm

That square root calculation is a special case of Newton's method for finding roots of equations, and it's a popular technique in many math libraries because it doubles the number of correct digits with every iteration.

For example, if you're calculating the square root of 2 and your current guess has the first 5 digits right, e.g. 1.4142, performing a single iteration gives you 10 good digits and two iterations gives you 20 good digits. In this case you get 1.414213562(43) after the next iteration and the real answer is 1.414213562(37).
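
You can watch the digits double using Python's decimal module (a sketch; 50 digits of working precision is arbitrary):

from decimal import Decimal, getcontext

getcontext().prec = 50       # enough working digits to see the effect
x = Decimal("1.4142")        # about 5 correct digits of sqrt(2)
for _ in range(3):
    x = (x + Decimal(2) / x) / 2    # one Newton step for x*x = 2
    print(x)

The first line printed is the 1.414213562(43) value mentioned above.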

September 01, 2010
by Ralphxyz

I hope this leads to more conversation and learning, but apologies beforehand, as some might find this post offensive.

75-80% of everything I know about electronics and computers comes from online forums. I have had a rather large library of computer books (I have often spent $400.00 a month at Amazon), but most of the book reading never really registered, either because I had very little practical need to apply it, or because somewhere in the "sample" code something didn't work, and that ended that quest because I could not get help from the author or discussion groups.

So I have read a lot of books on programming (from GeeWiz Basic, VB, Logo, C, C++, C#, Java, Prolog to APL, and probably a few more; now I am getting ready for assembler).

All of them have emphasized the features and shortcomings of their number-handling capabilities, especially their precision.

Now, maybe it is the Luddite in me, but after a discourse on number accuracy to the most infinitesimal point I have always asked myself: WHY? Why is this precision needed, or why does anyone need precision beyond a hundredth ($0.00, I do deal in money)?

Now please bear with me on this, and remember I did not have a teacher with whom I could search out and explore the wonders of mathematics, or who could inspire me to accomplish things in life, or really anyone to just talk with.

I have dealt a lot with warehousing and delivery logistics, where we warehoused items, rarely fractional parts. We delivered packages; if we delivered a partial package we were in trouble. Unless I think about it, I really do not know of dealing with anything in the real world that required greater than a thousandth degree of accuracy. Well, I did work for a micro-precision bearing company building Redeye missile guidance gyros, and they probably had a very critical accuracy requirement, but the chief machinist used to crank out the bodies of these gyros on a little bench lathe, so I wonder just how accurate they were.

Now that I am learning electronics I can see where, in circuit design, if using a 33pF capacitor I might need some precision in my circuit analysis; or then again I could just try it and see if it works.

To me, in my ignorant ways, I "sometimes" equate all this concern with precision of numbers with the lesson most of us have learned from programming: "just because you can do something does not mean that you should." I mean, if someone had not taught you that precision of numbers was important, would it somehow have manifested itself to you as being important, or are you just a reflection of what you have been taught? You were taught this, so you promulgate it for the next person to carry on; or do you actually use this precision, and if so, is the precision really needed? I suppose if you are a rocket scientist launching a mission to Mars you might need precision, but could some of that process be satisfied with "close enough"? I know getting there is critical, but how many calculations are involved in going up and making sure you were in motion?

Again I apologize for the baseness of this post, but I think "why" is a legitimate question, especially from someone such as myself who has no formal learning. Please don't rant; well, a few snide comments might be appropriate. I really do not know why this precision of numbers matters, or what it means to my life in a practical sense.

bretm, once again thank you for the excellent, thought-provoking tutorial. I really love to see this kind of information in the forum, and I hope there will be lots of follow-up conversations and hopefully more such articles.

Ralph

September 01, 2010
by bretm

"Why" is a perfectly valid question. The answer in this case, and it's not meant to be sarcastic, is "why not?"

This issue arose, for me, when looking for space-saving solutions to get a program a few percent smaller so it would fit onto the microcontroller. The program used fixed-point math in some cases. At one point there was a multiplication followed by a division by 10000, which represented multiplication by a fixed-point constant. There were a number of these constants in the program.

I already knew an easy technique to improve such approximations, so why not use it? Why not save time and space by multiplying by a byte value and then dividing by a byte value instead of using two-byte values, when you get the same or better accuracy, for free?

That was the main point that was admittedly buried in a rather long post.

The question remains: if you don't need the extra space or time, why bother with such optimization? I would say if the code is already written, and it works, and doesn't need to be optimized, then don't optimize it. There's not a good reason. But if you have this technique in your set of tools and are writing new code, I would again say, why not?

September 01, 2010
by wayward

Well, we are dealing with miniature computers here, so our programs will likewise have to be minimal. Which generally implies that our knowledge of miniaturization needs to maximize. :)

September 01, 2010
by Ralphxyz

Excellent, learning more with every reply.

Thank you. For me a discussion in a forum is a much better way of learning than reading it in a book. The "why not" makes sense, and wayward's point about miniaturization makes a lot of sense in application.

Ralph

September 21, 2010
by jbrad

Here is a Python script I wrote to implement the approximation bretm described above.

why? ........ why not!

#FracApprox.py
#concise fractional approximation of a real number,
#using the continued-fraction recurrence bretm described above
import sys

def usage():
    print "Oooops, at least one input is required!\nUsage:\n"
    print "\tFracApprox.py <number to approximate> [max numerator, default 256] [exponent for max numerator]"
    sys.exit(1)

if len(sys.argv) < 2:
    usage()

target = a = float(sys.argv[1])
i = 256                                # default maximum value for the numerator
if len(sys.argv) >= 3:                 # optional maximum numerator
    i = int(sys.argv[2])
if len(sys.argv) == 4:                 # optional exponent for the maximum numerator
    i = int(int(sys.argv[2]) ** float(sys.argv[3]))

c = [0, 1]                             # numerators of the convergents (column C)
d = [1, 0]                             # denominators of the convergents (column D)

while True:
    b = int(a)                         # whole part: the next continued-fraction term
    if len(c) > 2 and b * c[-1] + c[-2] > i:
        break                          # the next numerator would exceed the limit
    c.append(b * c[-1] + c[-2])        # the recurrence from the spreadsheet
    d.append(b * d[-1] + d[-2])
    print a, b, c[-1], d[-1]
    if a == b:                         # nothing left over: exact (Holy Ramanujan, Batman!)
        break
    a = 1.0 / (a - b)                  # next value of column A

print "Given a maximum numerator of %d" % i
print "The Fractional Approximation of %.11f is %d/%d = %.11f\n" % (target, c[-1], d[-1], float(c[-1]) / float(d[-1]))

September 21, 2010
by bretm

That's great. I've just started learning Python so that was a good program to experiment with. I was investigating using fractions instead of floating point and ran across this (with parentheses for Python 3.0's print):

import sys, fractions
fr = fractions.Fraction(sys.argv[1])
print(fr.limit_denominator(int(sys.argv[2])))

The ratios aren't always continued-fraction convergents, so it's a different algorithm; it can return results that fall in between two steps of the continued-fraction algorithm. Technically those in-between ratios give slightly less accuracy per bit of storage, but not by enough to be a practical issue.
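
For example (assuming Python 2.6 or later, and using from_float since older versions won't take a float directly), pi limited to two-digit denominators comes back as a ratio between two convergents:

import math
from fractions import Fraction

# 311/99 lies between the convergents 22/7 and 333/106, but it is the
# closest fraction to pi with a denominator of at most 100
print(Fraction.from_float(math.pi).limit_denominator(100))   # 311/99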

So in Python 2.6 or later I'd use that.
