NerdKits - electronics education for a digital generation


Project Help and Ideas » timer interrupt problem

March 26, 2014
by sask55
sask55's Avatar

I am having a problem achieving consistent results using a timer interrupt in my project. I have three micros set up as SPI slaves. Each slave chip directly controls the pin states on a stepper motor controller chip's input pins, and also receives feedback from the controller chip's output pins. The slave chips receive control bytes from, and send motor feedback information to, a master chip over SPI. The master chip receives control information from a PC over a serial USART connection, directs it to the appropriate slave, and passes feedback data from the motor controller chips back to the PC. The rotation direction, motor speed, and sequencing order of the stepper motors are controlled from the PC. The time interval, or frequency, of the stepper motor CLK pin pulses determines the motor speed and will vary greatly for each individual motor depending on a number of factors. My setup is working well delivering motor control bytes from the PC to the motor controller chips. The frequency of the SPI data sent to the individual chips is controlled by a timer interrupt on the master chip. This pulse timing interval is set from the PC by controlling the value of the output compare register on the master chip's timer. The setup has a frequency range from 0 to about 5 kHz, which is more than adequate for this project.

Each slave receives a stream of incoming control bytes from the master chip to control the stepper movements in real time. The stepper motors will often be required to rotate at different speeds and in a specific stepping order. Motor speed, direction of rotation, and torque levels are changed often by control bytes arriving in the PC command stream. Each command byte contains a number of independent bit-level states to be set on the motor controller chip, and the bytes arrive at unpredictable intervals. According to the data sheet for the motor controller chips, a 30 us (microsecond) pulse on the motor controller CLK input pin is sufficient to trigger a motor step. The highest CLK pulse rate that I require is about 5 kHz, so the shortest interval between incoming command bytes is on the order of 200 us. I have set up each slave with an SPI interrupt to receive the incoming command bytes from the master chip. The software on the slave then checks whether the new command byte has set the motor controller CLK input pin high. If the CLK pin is set high, a timer interrupt is started on the slave. After a time interval long enough to trigger a motor step on the controller, the slave pulls the CLK pin low and stops the timer. The exact width of this high pulse on the CLK pin is not critical, as long as it is safely longer than 30 us and not so long that it could interfere with the next possible command byte at 200 us. I am using an 86 us pulse width.

OK, so that is what I am doing, and here is my problem.

The slave's high-pulse timing interval is unreliable. It is often very short. My motor controllers are missing steps, and I believe it is because the slave chips are occasionally pulling the CLK pin back to the low state too quickly. This conclusion is based on my observations with my scope. It appears that most of the time the high pulse timing is correct, but occasionally it is very fast, too fast for the controller chips to recognize the pulse. I am using the timer in CTC (Clear Timer on Compare Match) mode. My understanding is that in this mode, each time the count register reaches the OCR0A value, the interrupt fires and the count register is reset. I have also tried resetting the TCNT0 timer register to 0 in code before starting the timer. I have attempted two methods to stop or disable the timer interrupt between pulses. I first attempted to disable and later enable the timer interrupt by setting or clearing the OCIE0A bit in the TIMSK0 register. That approach did not seem to work. Currently I am controlling the timer by way of the CS01 and CS00 bits in the TCCR0B register: clearing both CS01 and CS00 stops the timer counter, and setting both selects a 1/64 prescaler. I have not been able to determine why some seemingly random pulses are very short while most are as expected.

Scope screen shot of a fast-rate 4-pulse test where everything appears correct; there are no problems. The scope measurements are a 139 us pulse period (~7.2 kHz), an 85 us high-state period, and a 54 us low-state period.

screen3

Scope screen shot of a slower-rate test showing 6 pulses. It may not be clear, but the second pulse is very short, too short to attempt to measure at this sweep rate. The scope measurements are a 1.79 ms pulse period (~580 Hz), a ~70 us high-state period, and a ~1.72 ms low-state period.

screen2

Scope screen shot of the same slower-rate test showing 2 pulses. Even at this sweep rate the second pulse is very short, much less than the 30 us required by the controller chips.

screen 3

This is the portion of the code on the slave chip that deals with the timer interrupt and the SPI interrupt.

    void spi_slave_init(void){
        // disable the uart module
        UCSR0B = 0;
        // slave SPI pins
        DDRB |= (1<<PB4);
        DDRB &= ~(1<<PB3);
        DDRB &= ~(1<<PB5);
        DDRB &= ~(1<<PB2);

        sei(); // enable global interrupts
        // enable SPI and the SPI interrupt
        SPCR |= (1<<SPE) | (1<<SPIE);
    }

    void clk_pulse_setup() {
        // setup Timer0:
        // CTC (Clear Timer on Compare Match mode)
        // TOP set by OCR0A register
        TCCR0A |= (1<<WGM01);
        // clocked from CLK/64
        // which is 14745600/64, or 230400 increments per second
        // 4.34 us/increment
        //                                      TCCR0B |= (1<<CS01) | (1<<CS00);
        // set TOP to 19
        // because it counts 0, 1, 2, ... 18, 19, 0, 1, 2 ...
        // so 0 through 19 equals 20 events
        OCR0A = 19;
        // enable interrupt on compare event
        // clk pulse high width = 20 * 4.34 = 86.8 us
        // set TIMSK0 |= (1<<OCIE0A) to enable; clear TIMSK0 &= ~(1<<OCIE0A) to disable the interrupt
        TIMSK0 |= (1<<OCIE0A);
    }

    ISR(SPI_STC_vect){
        redu_current = (SPDR & (1<<PD5));
        PORTD = ((PORTD & (1<<PD5)) + (SPDR & ~(1<<PD5)));
        if ((PORTD & 1) != 0){ // if motor clk pin is high
            TCNT0 = 0;
            TCCR0B |= (1<<CS01) | (1<<CS00); // set timer prescaler to clk/64
        }
        SPDR = outbyte[outcount];
        if(outcount != 0){
            outcount--;
        }
    }

    SIGNAL(SIG_OUTPUT_COMPARE0A) {
        // when Timer0 gets to its Output Compare value,
        // pull clk low
        // disable timer
        // TIMSK0 &= ~(1<<OCIE0A);
        TCCR0B &= ~(7); // set timer prescaler to no clock source (Timer/Counter stopped)
        PORTD &= ~(1);  // pull motor clk pin to low state
    }
March 27, 2014
by Noter
Noter's Avatar

Maybe resetting the timer prescaler will do the trick -

    void spi_slave_init(void){
        // disable the uart module
        UCSR0B = 0;
        // slave SPI pins
        DDRB |= (1<<PB4);
        DDRB &= ~(1<<PB3);
        DDRB &= ~(1<<PB5); 
        DDRB &= ~(1<<PB2);
        sei(); // enable global interrupts
        // enable SPI enable spi interputs
        SPCR |= (1<<SPE) | (1<<SPIE);
    }

    void clk_pulse_setup() {
        // setup Timer0:
        // CTC (Clear Timer on Compare Match mode)
        // TOP set by OCR0A register
        TCCR0A |= (1<<WGM01);
        // clocked from CLK/64
        // which is 14745600/64, or 230400 increments per second
        // 4.34 us/ increment
        //                                      TCCR0B |= (1<<CS01) | (1<<CS00);
        // set TOP to 19
        // because it counts 0, 1, 2, ... 18, 19, 0, 1, 2 ...
        // so 0 through 19 equals 20 events
        OCR0A = 19;
        // enable interrupt on compare event
        // clk pulse high width = 20 * 4.34 = 86.8 us
        // set TIMSK0 |= (1<<OCIE0A) to enable; clear TIMSK0 &= ~(1<<OCIE0A) to disable the interrupt
        TIMSK0 |= (1<<OCIE0A);
    }

    ISR(SPI_STC_vect){
        redu_current= (SPDR & (1<<PD5));
        PORTD = ((PORTD & (1<<PD5)) + (SPDR & ~(1<<PD5)));
        if ((PORTD & 1) != 0){ // if motor clk pin is high
            TCNT0 = 0;
            GTCCR = (1<<PSRSYNC); // reset Timer/Counter1 and Timer/Counter0 prescaler
            TCCR0B |= (1<<CS01) | (1<<CS00); // set timer pre scaller to clk/64 
        }
        SPDR = outbyte[outcount];
        if(outcount != 0){
            outcount --;
        }
    }

    SIGNAL(SIG_OUTPUT_COMPARE0A) {
        // when Timer0 gets to its Output Compare value,
        // pull clk low 
        // disable timer
        // TIMSK0 &= ~(1<<OCIE0A);
        TCCR0B &= ~(7); // set timer prescaler to no clock source (Timer/Counter stopped)
        PORTD &= ~(1);// pull motor clk pin to low state
    }
March 27, 2014
by sask55
sask55's Avatar

Noter, I added the code to reset the timer prescaler as you suggested. It did not make any change as far as I can tell.

I have noticed that the second pulse is very often a very short pulse. I think there are other short pulses occurring as well, but it is somewhat difficult to capture a large number of pulses on the scope screen and still have the resolution required to establish which are short.

One more screen shot from the scope. Because faster command byte rates result in a higher duty cycle (shorter low-state times between pulses) on the output signal, it is possible to use a higher horizontal sweep rate on the scope and still capture a number of pulses.

~1.44 kHz pulse frequency. Pulse high state ~90 us; second pulse high state ~14 us.

screen4

Thanks for the suggestion. Any other ideas would be appreciated.

March 28, 2014
by sask55
sask55's Avatar

Once again I managed to get hung up on a problem that was totally of my own making. It occurred to me this afternoon that there are actually two ways the slave output pin could be pulled low in my setup. I was thinking that the timer interrupt was causing the shorter high-level CLK pulses by firing at a shorter interval than I programmed. Somehow I neglected to consider that an incoming command byte on the SPI RX directly controls the level of the PORTD pins. Therefore, if a command byte arrives from the master with the motor CLK bit low, the CLK pin will be pulled low.

The entire problem I was having has nothing to do with the timer interrupt on the slave. That has been working as expected all along. The issue is simply incorrect programming of the master chip communication protocol that I am using.

I don’t imagine anyone is really interested in the details. Basically, the master is programmed to quickly retrieve 3 additional bytes from a slave chip whenever the slave periodically indicates that a new digital calliper reading is available. I made a logic error in having the master call for the calliper reading, which had the direct effect of pulling the CLK pin low.

I have it working now. It seems so obvious now. I often wonder how I miss this kind of thing and get myself pointed in the wrong direction.

April 09, 2014
by JimFrederickson
JimFrederickson's Avatar

A few additional suggestions that I hope may help.
(Maybe some of these things you have already considered.)

You do have an interesting setup.

It seems your primary motivation for using multiple AVR's is either to increase your I/O
pin count or to distribute the control closer to where it is needed/used. (I am
only guessing.)

One thing I think would be helpful would be to distribute the intelligence and
responsibilities more than you have.

I would view your current setup like a "boss micromanaging the employees under him"...

So change the paradigm to "give your Slave AVR's Tasks that are performed under rules
rather than micromanaging them".

From the two posts I have read regarding this (Timer Interrupt Problem, and Electronic
Component Quality), I think too much intelligence/responsibility is in your Master AVR.
(Often things need to be complicated for various reasons, sometimes for their own
good. :) )

Currently, it seems, your Master AVR is making all the decisions on when an
actual pulse to a motor is sent and when it is not. (Referring mostly to the Timer
Interrupt Problem post).

Rather than have the Master AVR control the "intricacies of accomplishing this" it
would be better in many ways to have the Slave AVR's do this.

So instead of having the Master AVR send a code to the Slave AVR to "pulse the motor"
create "commands for the Slave AVR to execute".

i.e.
(Assuming the Max Steps are 100 per second, so our Slave AVR Timer Interrupt will be
200 times per second. Again this is just to make it easier to think about.
Theoretical as it were. :) )

Master/Slave AVR Motor Control Commands:

Command     Action                                          Effect
1           Immediate Stop                                  Motor Stops
2           Step 1, rest 199 Timer Interrupts, repeat       Motor @ 1% Max
3           Step 1, rest 99 Timer Interrupts, repeat        Motor @ 2% Max
4           Step 1, rest 49 Timer Interrupts, repeat        Motor @ 4% Max
5           Step 1, rest 24 Timer Interrupts, repeat        Motor @ 8% Max
6           Step 1, rest 19 Timer Interrupts, repeat        Motor @ 10% Max
7           Step 1, rest 9 Timer Interrupts, repeat         Motor @ 20% Max
8           Step 1, rest 4 Timer Interrupts, repeat         Motor @ 40% Max
9           Step 1, rest 3 Timer Interrupts, repeat         Motor @ 50% Max
10          Step 1, rest 1 Timer Interrupt, repeat          Motor @ 100% Max

(I just chose these so the Timer Interrupts would come out evenly. You could just send
the "steprests" as a count, rather than a speed, or some other coding as it suits you.
I personally would prefer a "true speed" as either RPM, cm/sec, something like that.)

Then in the Master AVR:  
    1-  Change your code to calculate and send the "desired speed"
        instead of the "actual time to step".

In the Slave AVR change your SPI Receive to just "store the desired speed in a variable"
    1-  Keep the timer running all of the timer rather than starting and stopping
    2-  Change SPI Receive to something like this:
            Save new speed
            flg_newspeed = TRUE

        (Understand that maybe you will want to change this so that if
        "flg_newspeed" is already TRUE when a new speed arrives, you have
        a speed overrun, i.e. a speed change prior to the previous speed
        being processed.)

    3-  Change the Timer Interrupt Function to something like this:
        //  If we are sending a steppulse right now we need to finish that first
            if (flg_steppulse == TRUE) {
                Turn off steppulse
                flg_steppulse = FALSE
                }
            else{
        //  If we are still counting down the rest period for the last speed let's do that
                if (cur_steprest_count != 0) {
                    cur_steprest_count--
                    }
        //  We have already finished out the steprest period so
                else {
        //  Do we have a new speed, if so get it an process it
                    if (flg_newspeed == TRUE) {
                        Calculate/Store cur_steprest
                        flg_newspeed = FALSE
                        }
        //  If our cur_steprest speed is something other than 0/stopped
        //  Send a pulse and set the new steprest period
                    if (cur_steprest != 0) {
                        Turn on steppulse
                        flg_steppulse = TRUE
                        cur_steprest_count = cur_steprest
                        }
                    }
                }

So adding complexity to your thinking, I think, gives you better
safeguards/flexibility for what you are doing.

What I would consider a huge advantage to this is that the Slave AVR can be tested with
just using 3 pins connected to buttons. (speed up, speed down, stop. One ADC Pin if
you use a resistor divider for your buttons, but then only 1 button press at a time can
be recognized unless you are careful in your resistor selection.). This testing could
be done autonomously without any need of having the Master AVR communicating or present.

Everything for the control of the Toshiba chip is contained in the Slave AVR. So no
matter what the Master AVR does it won't affect how the Slave AVR handles the motor. (Only the speeds of the motor are affected.)

Additional safeguards can be put into place:  
    1-  Exact control of the Toshiba Chip is achieved and ALL the Time Critical
        Elements associated with it reside in the Slave AVR
    2-  A Temperature Sensor could be used to monitor the Toshiba Chip
    3-  A Watchdog Timer could be setup and if there is no speed change from
        the Master AVR within some period of time the motor shuts down
    4-  Validity/Error Checking could be applied to the SPI Input Data
    5-  LED's to monitor Motor On/Off, SPI Receive, etc...
    6-  Manual control, or override, of the motor without the Master AVR
    7-  If you are moving motors around then the Limit Switches can be monitored
        by the Slave AVR as well.

NOTE:

The 14,745,600 cycles/sec of the AVR is a big number!

At least at first.

Sometimes it surprises me how quickly it can all go away.

In your post the Timer is running at 230,400 ticks/sec.

Then you count to 20. (0-19)
So that brings things to 11,520 interrupts/sec.

But...

What is often masked, or not appreciated, is that there is interrupt overhead from the
Hardware as well as the C Compiler.

I don't know how Optimization or Compiler Versions affect this (I don't really think
they do much), but with my setup I have the following:

Interrupt Entry
15 - 2 cycle instructions
2 - 1 cycle instructions

{ My code }

15 - 2 cycle instructions
1 - 1 cycle instruction
RETI at 4 cycles

And, finally, there is a "JMP" for 3 cycles at the Interrupt Service Vector to get
to the Interrupt Function.

So, overhead wise, we have 70 cycles for Each Interrupt!
(Which is one of the biggest reasons why programmers often use/need Assembler to have
more precise control, especially for interrupts.)

So the 11,520 interrupts/sec really becomes 806,400 cycles/sec JUST in overhead.
(Roughly about 5.5% of your AVR CPU.)

What is often a more important number though is the 14,745,600 / 11,520...

This number, the 1,280, is the MAX number of Cycles you have available before this 1
Interrupt goes on too long and the next one is missed...

Also there is quite a bit of overhead involved for 16-bit values on the AVR. Stick
with 8-bits values in Interrupt Routines if you can.

April 10, 2014
by sask55
sask55's Avatar

Hi Jim

I really appreciate the time and thought you have put into what you posted. I will be looking closer into some of the points you have made. I have known from the outset that the milling movements will be limited by the master chip CPU. I am probably expecting too much from the master chip, but there are reasons that I am attempting to do things the way I am.

I have been reasonably happy with the results that I am seeing, both on the scope and in the actual motor movements. For this project to work, I believe movement sequencing order is far more critical than occasional short delays or pauses in the movement. I have established precise motor movement control at control byte frequencies well above 1000 Hz.

I am hoping to make my mill operate as a CNC unit. Since I have some time now, I will try and give a more detailed description of what I am doing and how it has been working out.

The mechanical screws on the mill will move the table 0.100 inch per revolution. To turn a stepper motor one rotation requires 200 steps (pulses of the motor controller CLK pin). Therefore each full step of a motor will produce 0.0005 inches of movement. Required feed rates of the mill will vary from extremely slow to as fast as possible when moving to the next cut. A 1 kHz CLK pulse frequency produces a 30 inch/min feed rate, which is not fast but I believe will be adequate for non-cutting movements.

In order to achieve CNC milling capability, the mill movement must follow detailed instructions read from a file that has been generated by design software. These instruction sets include complicated patterns of constantly changing motor movement on any of the three motors. Typically these instructions are "G code" files. Each G code instruction details a movement of one type or another; completing a task may require thousands of individual G code instructions.

For example, if I wish to mill a PC board, I lay out the board using a program like Eagle, then use the CAM processor in Eagle to build the board production Gerber files. I then use a program (cirqwizad) to convert the Gerber files to the required G code files. These G code files give detailed instructions to produce trace-outline type PC boards. Trace outline is a much preferred method to use with a milled board rather than an etched board.

This is an example of a few G code instructions to mill the top side of my master board. The entire file is 7800 lines long, so a lot of instructions that could possibly take hours to mill.

   G1 Z-0.05 F200
   G2 X15.956 Y3.754 I-0.003 J0.53 F300
   G2 X15.955 Y3.851 I0.656 J0.055 
   G2 X16.079 Y4.162 I0.53 J-0.031

I have developed software to run through a G code file and convert it to a file consisting of the individual mill movement instructions needed to complete each G code step. A G code instruction is basically a destination point. Circular arcs will also include the center of the arc and the radius of the arc of movement. My mill code file will include every incremental movement to be carried out, in precise sequence, to direct the mill on the correct path to the next G code starting point. It also includes the correct coordinates of the next G code starting location so that the system can verify the movement using the digital calliper feedback.

59 X21.186Y4.368xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx

60 X21.184Y4.53Byyyyyyyyyyy

61 X20.906Y5.084yyLxxyxyxyxyxyxyxyxyxyxyyxyxyyxyxyyxyyxyyxyyyxyyyxyyyyxyyyyyyyyyyy

62 X20.914Y5.178Ryyyyyyyy

63 X21.274Y5.686yyyxyyyyxyyyxyyyxyyxyyxyyxyyxyxyyxyxyxyyxyxyxyxyxyxxyxyxyxxyxyxxyxxy

64 X21.352Y5.721xyxxyxxxy

65 X21.516Y5.76xxyxxxxyxxxxxyxx

66 X21.612Y5.763xxxxxxxx

In the event that the mill is to move several inches, one G code instruction would generate a mill code instruction thousands of characters long. The mill code file contains many constantly changing sequences of motor control commands to be carried out in a specific order. These sequences produce straight lines in any three-dimensional direction, or circular arcs of any radius and angular rotation. The code also sets the feed rate movement speed to that specified in the G code instruction. None of this can be anticipated before it arrives at the mill. All of the mill movements originate from a CAM processor file.

The software I am using on the PC reads my mill code file and sends 1-byte instructions to the master chip over the UART. As each byte is read from the file on the PC, it is encoded into an 8-bit command byte that is not very readable to a human. Basically, the two LSBs of the PC-to-master command stream direct the master where to send the information contained in the other 6 bits of that byte. The incoming byte may be "addressed" to the master itself; in that case, the bit-level state (6 bits) of the incoming command byte is used to control power and other criteria necessary for proper start-up and shut-down sequencing of the motor controller chips. The incoming command byte may also direct the master to simply pass on the state of the six data bits to a specific slave chip. There are two fundamental reasons for the slave chips. As you said, I have access to more input and output pins by using slaves. As it is, I have a total of 3 pins that are not in use on all four chips combined. Just as important, I was having a lot of trouble reading the output from the four callipers with their independent timing cycles. I found it much more direct to dedicate an independent micro to each axis. The slave can report to the master only in the event that the calliper reading has actually changed.

The slave chips are actually kind of busy as well. Each axis is equipped with digital callipers to provide feedback as to the exact position of the mill. The large movement on the X axis is actually read by combining the positions indicated on two separate callipers. The individual digital callipers are doing their own thing on their own timelines. These callipers update their readings about 10 times per second. This relatively long period between calliper readings makes position verification a time-consuming process. The slave chips are also monitoring the output pins on the motor controller chips. In the event that a controller chip is reporting "HOT", the slave immediately disables the chip, stops the CLK and DIR pin updates, and sets up to inform the master to start a power-down sequence. Each time a slave receives an input byte on the SPI from the master, it returns an information byte to the master. If the incoming SPI byte sets the controller chip CLK pin high, the slave sets a timer interrupt. When the timer on the slave fires, the CLK pin is pulled low again, and the motor controller chip has registered one pulse on the CLK pin. The SPI slave-to-master return byte contains the logic levels on the controller output pins as well as an indicator bit that informs the master that a new calliper reading is available for that axis. The master immediately sends a quick sequence of three dummy bytes to the slave to retrieve the waiting calliper reading. The master then sends that reading to the PC on the UART and also, at slower speeds, to an LCD display on the mill. The slave chips can also be instructed to reduce the current level sent to the motors by the controllers in the event that the motor speed is very slow. This feature protects the motors from overheating in the event that they are in a high-current, low-speed condition.

The frequency at which the bytes are forwarded to the slaves is set by a timer interrupt on the master. The speed timer on the master is an 8-bit timer, and its target value is set from the PC UART command stream. There is also the possibility of a much longer (slower) period, determined by the state of a bit in the UART stream.

I have the UART, SPI, and a timer interrupt operating on the master chip. The slaves have SPI, calliper, and controller chip inputs, as well as a timer interrupt. There is little doubt I am going to have a few problems, if this is even possible at all.

Believe it or not, this is all actually working more or less as planned. The maximum motor speed is not as high as I had planned. The motors begin to stall at speeds above about 1.5 kHz, even when the scope indicates the controller is delivering the required motor coil voltages. I can live with that.

To date I am sending control commands to one motor at a time (i.e. straight horizontal and vertical lines). I have complete control of the motor speed, duration, and torque (current) level from the PC. I can instruct the motor to move any given amount of rotation, with some occasional flutters. I have been attempting to handle, in software, the occasional dropped byte that I believe is probably a result of interrupt clashes. On the other hand, chips are more or less blowing up on me occasionally. That can't be a good sign.

Many may think that this is a crazy plan not likely to ever work very well. That may be correct, but honestly it is looking promising to me. Of course I could always just purchase controller boards and software, but what fun would that be? I have this overly optimistic idea that some day I will use my prototype board to mill and drill the 4 boards required to build the final version of this thing.

Sorry about all the spelling grammar and typo errors, this is likely confusing enough without that added. Hi Jim

I really appreciate the time and thought you have put into what you posted. I will be looking closer into some of the points you have made. I have know from the unset that the milling movements will be limited by the master chip CPU. I am probably expecting to much from the master chip but there is reasons that I am attempting to do things the way I am

I have been reasonable happy with the results that I am seeing both on the scoop and the actual motor movements. For this project to work I believe movement sequencing order is far more critical then occasional short delays or pauses in the movement. I have established precise motor movement control at control byte frequencies well above 1000 Hz.

I am hoping to make my mill operate as a CNC unit. Since I have some time now, I will try and give a more detailed description of what I am doing and how it has been working out.

The mechanical skews on the mill will move the table 0.100 inch per revolution. To turn a stepper motor one rotation requires 200 steps (pulses of the motor controller clk pin). Therefore each full step of a motor will produce .0005 inches of movement. Required feed rates of the mill will very from extremely slow to as fast as possible when moving to the next cut. A 1 Khz clk pulse frequency produces a 30 inch/min feed rate which is not fast but I believe will be adequate for none cutting movements.

In order to achieve CNC milling capability the mill movement must follow detailed instructions read from a file that has been generated by design software. These instruction sets include complicated patterns of constantly changing motor movement on any of the three motors. Typically these instruction are “G Code” files. Each G code instruction details a movement of one type or other to complete the task may require thousands of individual G code instructions.

For example if I wish to mill a PC board. I lay out the board using a program like eagle. Then use the Cam processor on eagle to build the board production Gerber files. I then use a program (cirqwizad) to convert the Gerber files to the required G Code files. These G Code files give detailed instructions to produce trace outline type PC boards. Trace outline is a much preferred method to use with a milled board rather then a etched board.

This is an example of a few G code instructions to mill the top side of my master board. The entire file is 7800 lines long, so alot of instructions that could posibly take hours to mill.

   G1 Z-0.05 F200
   G2 X15.956 Y3.754 I-0.003 J0.53 F300
   G2 X15.955 Y3.851 I0.656 J0.055 
   G2 X16.079 Y4.162 I0.53 J-0.031

I have developed software to run thru a Gcode file and convert it to a file consisting of individual mill movement instructions to complete each GCode step. A Gcode instruction is basically a destination point. Circular arcs will also include the center on the arc and the radius of the arc of movement. My mil code file will include every incremental movement to be carried out in precise sequence to direct the mill on the correct path the next Gode staring point. It also includes the correct coordinates of the next Gode starting location in order that the system can verify the movement using the digital calliper feedback.

59 X21.186Y4.368xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx

60 X21.184Y4.53Byyyyyyyyyyy

61 X20.906Y5.084yyLxxyxyxyxyxyxyxyxyxyxyyxyxyyxyxyyxyyxyyxyyyxyyyxyyyyxyyyyyyyyyyy

62 X20.914Y5.178Ryyyyyyyy

63 X21.274Y5.686yyyxyyyyxyyyxyyyxyyxyyxyyxyyxyxyyxyxyxyyxyxyxyxyxyxxyxyxyxxyxyxxyxxy

64 X21.352Y5.721xyxxyxxxy

65 X21.516Y5.76xxyxxxxyxxxxxyxx

66 X21.612Y5.763xxxxxxxx

If the mill is to move several inches, one G code instruction can generate a mill code instruction thousands of characters long. The mill code file contains many constantly changing sequences of motor control commands to be carried out in a specific order. These sequences produce straight lines in any three-dimensional direction, or circular arcs of any radius and angular rotation. The code also sets the feed rate (movement speed) to that specified in the G code instruction. None of this can be anticipated before it arrives at the mill; all of the mill movements originate from a CAM processor file.
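To give an idea of the kind of conversion involved, something like the following would flatten one straight-line segment into a one-character-per-step string similar to the mill code lines above (the function name and the plain 'x'/'y' encoding are just for illustration, not my actual file format):

```c
#include <assert.h>
#include <string.h>

/* Sketch only: flatten a straight-line move of dx X-steps and dy Y-steps
 * (both non-negative) into one character per motor step, interleaved so
 * the tool path stays as close to the ideal line as single steps allow. */
void line_to_steps(int dx, int dy, char *out)
{
    int x = 0, y = 0, i = 0;
    while (x < dx || y < dy) {
        /* step the axis that is furthest behind the ideal line */
        if (x < dx && (long)x * dy <= (long)y * dx) {
            out[i++] = 'x';
            x++;
        } else {
            out[i++] = 'y';
            y++;
        }
    }
    out[i] = '\0';
}
```

A 3-step X, 1-step Y move comes out as "xyxx", which is the same kind of interleaving seen in the example lines above.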

The software I am using on the PC reads my mill code file and sends 1-byte instructions to the master chip over the UART. As each byte is read from the file on the PC it is encoded into an 8-bit command byte that is not very readable to a human. Basically, the two LSBs of the PC-to-master command stream direct the master where to send the information contained in the other 6 bits of that byte. The incoming byte may be "addressed" to the master itself; in that case the bit-level state (6 bits) of the incoming command byte is used to control power and other criteria necessary for proper start-up and shut-down sequencing of the motor controller chips. The incoming command byte may also direct the master to simply pass on the state of the six data bits to a specific slave chip. There are two fundamental reasons for the slave chips. As you said, I have access to more input and output pins by using slaves; as it is, I have a total of 3 pins that are not in use on all four chips combined. Just as important, I was having a lot of trouble reading the output from the four callipers with their independent timing cycles. I found it much more direct to dedicate an independent micro to each axis. The slave can report to the master only in the event that the calliper reading has actually changed.
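For illustration, packing and unpacking a command byte amounts to something like this. The function names and the particular slave numbering are made up for the example; only the two-LSB address / six-bit payload split is the real layout:

```c
#include <assert.h>
#include <stdint.h>

/* Hypothetical helpers for the command byte layout:
 * bits 1..0 = destination (0 = master itself, 1-3 = a slave; which slave
 *             gets which number is my guess), bits 7..2 = pin-state payload */
static uint8_t pack_cmd(uint8_t dest, uint8_t payload)
{
    return (uint8_t)((payload << 2) | (dest & 0x03));
}

static uint8_t cmd_dest(uint8_t b)    { return b & 0x03; }   /* low 2 bits  */
static uint8_t cmd_payload(uint8_t b) { return b >> 2; }     /* high 6 bits */
```

So pack_cmd(2, 0x15) produces 0x56, and the receiving master recovers destination 2 and payload 0x15 with two cheap bit operations per byte.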

The slave chips are actually kind of busy as well. Each axis is equipped with digital callipers to provide feedback as to the exact position of the mill. The large movement on the X axis is actually read by combining the positions indicated on two separate callipers. The individual digital callipers are doing their own thing on their own timelines, updating their readings about 10 times per second. This relatively long period between calliper readings makes position verification a time-consuming process. The slave chips are also monitoring the output pins on the motor controller chips. In the event that a controller chip is reporting "HOT", the slave immediately disables the chip, stops the CLK and DIR pin updates, and sets up to inform the master to start a power-down sequence. Each time a slave receives an input byte on the SPI from the master it returns an information byte to the master. If the incoming SPI byte sets the controller chip CLK pin high, the slave sets a timer interrupt; when the timer fires, the CLK pin is pulled low again and the motor controller chip has registered one pulse on the CLK pin. The slave-to-master return byte contains the logic levels on the controller output pins, as well as an indicator bit that informs the master that a new calliper reading is available for that axis. The master immediately sends a quick sequence of three dummy bytes to the slave to retrieve the waiting calliper reading. The master then sends that reading to the PC on the UART and also, at slower speeds, to an LCD display on the mill. The slave chips can also be instructed to reduce the current level sent to the motors by the controllers when the motor speed is very slow. This feature protects the motors from overheating in a high-current, low-speed condition.
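For what it's worth, the arithmetic for the ~86 us CLK-high pulse works out roughly like this, assuming the 14.7456 MHz NerdKits crystal and a /8 timer prescaler (both assumptions on my part; one tick is then about 0.54 us):

```c
#include <assert.h>
#include <stdint.h>

/* Hypothetical helper: timer ticks for a pulse width in microseconds,
 * assuming F_CPU = 14.7456 MHz and a /8 prescaler. Valid for pulse
 * widths under roughly 290 us (the multiply stays inside 32 bits). */
static uint8_t pulse_ticks(uint32_t us)
{
    return (uint8_t)(us * 14745600UL / 1000000UL / 8UL);
}
```

An 86 us pulse is about 158 ticks, comfortably above the 30 us minimum the controller needs and well clear of the ~200 us interval before the next possible command byte.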

The frequency at which the bytes are forwarded to the slaves is set by a timer interrupt on the master. The speed timer on the master is an 8-bit timer; its compare target value is set from the PC UART command stream. There is also the possibility of a much longer interval (slower frequency) determined by the state of a bit in the UART stream.
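As a sketch of the compare-value arithmetic, assuming the same 14.7456 MHz clock and a /64 prescaler in CTC mode (the prescaler choice is my assumption), the step frequency is f = 14745600 / (64 * (OCR + 1)), so the PC-side value could be computed like this:

```c
#include <assert.h>
#include <stdint.h>

/* Hypothetical helper: 8-bit CTC compare value for a desired step
 * frequency in Hz, assuming F_CPU = 14.7456 MHz and a /64 prescaler.
 * A stopped motor (0 Hz) would be handled by disabling the timer
 * instead of calling this. */
static uint8_t ocr_for_freq(uint32_t hz)
{
    uint32_t ticks = 14745600UL / 64UL / hz;  /* timer ticks per step */
    if (ticks > 256) ticks = 256;             /* clamp to 8-bit range  */
    if (ticks < 1)   ticks = 1;
    return (uint8_t)(ticks - 1);
}
```

At the 5 kHz top end this gives a compare value of 45, so the whole useful range fits easily in the 8-bit register.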

I have the UART, SPI and a timer interrupt operating on the master chip. The slaves have SPI, calliper and controller chip inputs, as well as a timer interrupt. There is little doubt I am going to have a few problems, if this is even possible at all.

Believe it or not, this is all actually working more or less as planned. The maximum motor speed is not as high as I had planned: the motor begins to stall at step rates above about 1.5 kHz, even when the scope indicates the controller is delivering the required motor coil voltages. I can live with that.

To date I am sending control commands to one motor at a time (i.e. straight horizontal and vertical lines). I have complete control of the motor speed, duration and torque (current) level from the PC. I can instruct the motor to move any given amount of rotation, with some occasional flutters. I have been attempting to handle, in software, the occasional dropped byte that I believe is probably a result of interrupt clashes. On the other hand, chips are more or less blowing up on me occasionally. That can't be a good sign.

Many may think that this is a crazy plan not likely to ever work very well. That may be correct, but honestly it is looking promising to me. Of course I could always just purchase controller boards and software, but what fun would that be. I have this overly optimistic idea that some day I will use my prototype board to mill and drill the 4 boards required to build the final version of this thing.

Sorry about all the spelling, grammar and typo errors; this is likely confusing enough without that added.

April 10, 2014
by sask55

Something has gone wonky. How did I lose all the linefeeds? I also got two copies pasted in somehow.

April 11, 2014
by JimFrederickson

You have got a great Project there.

You have already incorporated many of the things that I had mentioned.

It seems that G-Code and Serial Communications both still have a lot of life left!

Commands can still be sent to the AVR Slaves. Instead of "Speed", it could just be
"step count and rate"...

You could "probably" get rid of the necessity to read the Caliper Settings...

Create a "zeroing routine" so when the AVR Master Powers up it can send a command to
the AVR Slaves so they run to their limits reading the calipers and
verifying/determining steps per unit of measurement.

Then...

The AVR Master could just send the "location to travel to, and the rate" to the AVR
Slaves.

Regardless of what your "calipers say", you can only position by "steps from the motor". So there really is "no need" to read the calipers for positioning, but it is really good to have them on startup to "verify". (Even if it turns out to be slightly
non-linear, you could use a look-up table created from the Zeroing routine to tackle
that.)

That may help a bit too...

Best of luck...

April 13, 2014
by sask55

I have posted a video on YouTube. The quality is not good but it does show some of the project to date.

I have no experience with CNC of any kind. Judging from my experience using the mill manually, the feed rate is not always easy to determine. I anticipate that there will very likely be times when the stepper motors will not have the capability to produce the torque the mill screws require at the programmed speed. Dulling machine tools, materials that are harder than expected and other factors may cause the feed screw torque requirement to exceed the stepper capabilities. Without some type of feedback to the control source, it seems to me that the control system could carry on sending a sequence of commands even if the mill head were completely jammed and not moving at all. Each G code command gives a destination, but without any verification that the mill actually made it to that location, any skips or misses are cumulative. Any error in position will carry through the entire run, or until verification is carried out. It would be possible to do verification using the movement limit switches, but that would require a lot of potentially long-distance movements and would be very time consuming. Since I now have a working version of the hardware and software on the mill to read the positions using digital calipers, I thought I should attempt to use that capability in my CNC project. I realize that this adds a considerable degree of complexity to timing and communications. If I get it working I believe it will be well worth the trouble.

As far as the idea "The AVR Master could just send the "location to travel to, and the rate" to the AVR Slaves" goes, I have given this a considerable amount of thought and experimentation. Milling something like a PC board could require thousands of very short, constantly changing sequences of movements. The real-time order of execution of the three motors is critical. Typical movement sequences like the ones I posted earlier (lines 59 to 66 of 7800) are examples of that type of requirement. I have not thought up any communication system that is more compact than the one byte per step that I am now using. For long constant strings of identical commands, sending the slave data on speed and step number would be very efficient; the required information could likely be sent in just a few bytes. On short, ever-changing sequences, however, the required information is much larger than one byte per step. Secondly, how do I ensure timing is correct between slaves over what could be long periods of time? Using SPI, the slaves cannot initiate communication with the master to inform it that a sequence running and timed on the slave is complete. I believe it would be difficult to maintain the correct order of execution on three slaves that are using three separate clocks. I considered driving the motor controllers' CLK pins all from the master chip, as I could manage to find three pins on the master to use for sequencing and timing the movements from a single source. The problem I see there is the potential for a "race condition" if most of the input pins on the controllers are driven from a slave and the CLK pin is driven from the master. I think there could be times when it may be difficult to know which pin change would occur first, and what the consequences would be of a CLK pulse arriving before a direction pin change.

That is a lot of detail. The important thing is that the communication system I have set up is capable of delivering the command data to the motor coils much faster than the motor is capable of responding to the changes. I have set up the software on the master chip to compensate for faster clocking rates by reducing functionality. As the master chip timer interrupt trigger interval is reduced, the master begins to lighten its workload. At one point the positional information is no longer written to the LCD screen. At very high speeds the master stops calling for the caliper readout data from the slaves. All this seems to be working. The position is changing so fast that the readout is meaningless at that point anyway; the PC can request a caliper reading after the movement stops or slows down.

Now, if I could only determine why I am getting the occasional sudden and dramatic chip failure. Whenever I touch the chips with my finger to gauge their temperature they are not even warm, yet I have had two chips blow while doing nothing new at all.

April 14, 2014
by JimFrederickson

I think you will find that if your motors "lack the torque required to perform a step"
you will end up with chatter, non-reproducibility, and possibly shortened motor life.

Motors can "lack torque" for two reasons.

1 - They are just too small and/or geared too high. (Steppers, by definition, are not
"geared" per se, but depending on how they are attached to the mill they may or may not
produce a "gearing/leverage" effect. Hopefully that clarifies what I am getting at.)
2 - They are being pushed too fast and/or too deep for the material being milled

Using the Calipers as you are though, for constant feedback, is interesting.

I am sure you will work through it, you have gotten a long way in any case.
