
CNC mill project

December 02, 2014
by sask55

Ralph

This post is more or less in response to your question in the thread Rick started about his 3D printer. I did not want to change the focus of that thread from the interesting discussion about 3D printers, so I started this one. I may add to this thread in the future if I have anything interesting to post about my progress.

My mill project is not typical in any way. I guess I could say I am much more focused on learning a bit about electronics and command-and-control systems than on actually producing a mill. It started with a few simple experiments I did driving stepper motors which I had salvaged from old printers and scanners. I had also done some work with reading from digital callipers. Since I now have digital callipers installed on my mill and am able to read the location of the mill head on the PC, I decided to attempt to use that capability to provide feedback to the CNC mill control. I realize that this capability is not typical or required in any way. It may not even be desirable, as it may just slow things down and add another layer of unnecessary complexity to the system.

So, to answer your question: my entire system hardware, firmware, and software are completely custom designed from a very basic level. I have been playing around with the various aspects of this idea for years now. I just got things set up again this fall after a late harvest, and it takes me a while to re-familiarize myself with where I am and with the tools and programs I have been using. To date I have not actually done a test on the mill, although I am now very close to that point. I am just operating the motors on my bench and attempting to resolve a couple of issues in software to smooth out the motor control.

Currently my system is designed to move the mill under precise three-axis control originating from either Gcode files or manual control commands from the GUI on the computer. These files can be generated by many programs. Eagle, for printed circuit boards, is what I am playing around with now.

The CNC milling process, which is working except for a couple of relatively small hiccups, goes something like this:

1 - If I have a PC board that I would like to produce using the CNC mill, I require a set of Gerber files. These files can be produced using the CAM processor available in Eagle after the circuit schematic and layout are completed.

2 - I then open the Eagle-generated files in cirQwizard to produce a set of Gcode files. CirQwizard will produce Gcode instruction files for what is called insulation milling of the board's copper layers, and a Gcode instruction set for drilling the board.

3 - I open the Gcode files that have been generated by cirQwizard in a file conversion program I developed in C#. The Gcode is converted to a series of individual motor step and speed changes that the mill must perform to complete each Gcode instruction. These stepper motor steps (mill movements) must be carried out in precise order and with precise timing to control the mill's movement direction and speed. Since it requires 2000 individual steps to move the mill head 1 inch, these command sequences can be very long, with constantly changing patterns to produce circular or straight mill movements in any direction.

G1 Z-0.05 F200

G2 X15.956 Y3.754 I-0.003 J0.53 F300

G2 X15.955 Y3.851 I0.656 J0.055

G2 X16.079 Y4.162 I0.53 J-0.031

An example of 4 Gcode instructions from the top-layer milling file of a PC board. There are over 8000 of these Gcode instructions in the file that mills the master board for my project. Some of them produce .mil instruction sets that are much longer than these examples.

8 Z-0.05S200zzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzz

9 X15.956Y3.754S300yLxxxxxxxxxyxxxyxxxyxxyxxyxxyxxyxyxyxxyxyxyxyxyxyyxyxyxyyxyyxyyxyyxyyyxyyyyxyyyyyy

10 X15.955Y3.851yyyyyyy

11 X16.079Y4.162yyyyyyyRxyyyyxyyyxyyxyyxyyxyxyyxyxy

The same four instructions in my ".mil step file" format after conversion from Gcode. This is the file format that the CNC software on the PC uses to command mill movements. A z steps the Z axis (the mill head) one stepper motor step. An x moves the table left or right; a y moves the table forward or back. S200 and S300 command motor step timing changes, setting the feed rate of the mill table movements. R and L command a change in the rotational direction of the X axis motor, so the table moves either left or right as required. X15.956 is the destination (in mm) of the current mill movement on the X axis; the movement can be verified against the calliper reading on the X axis after the command is completed, if desired.
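Just to illustrate where interleaved patterns like "xxyxxyxy" come from, here is a toy sketch of generating step characters for a straight move using a Bresenham-style error term. This is not my actual converter (which also handles arcs, speed changes, and direction changes); all the names here are made up for illustration.

    #include <stdio.h>

    /* Emit one character per motor step for a straight move of dx steps in X
       and dy steps in Y (assumes dx >= dy >= 0, for brevity). */
    static void emit_line_steps(long dx, long dy)
    {
        long err = dx / 2;
        for (long i = 0; i < dx; i++) {
            putchar('x');            /* one X-axis motor step */
            err -= dy;
            if (err < 0) {
                err += dx;           /* time for the slower axis to catch up */
                putchar('y');        /* interleave a Y-axis step */
            }
        }
    }

    int main(void)
    {
        emit_line_steps(20, 7);      /* prints an "xxyxxy..."-style pattern */
        putchar('\n');
        return 0;
    }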

4 - I then open the .mil control file in a CNC command console on the PC. The PC is connected to the mill hardware using a UART serial link. I developed that command console using C#. It gives full three-axis control of the mill movements, either from a ".mil" CNC file or from a user interface GUI on the PC (manual movements). The UART also returns information from the mill regarding calliper readings, movement limit switches, and motor controller output pin status. Using that console I instruct the PC to begin CNC milling using the instructions in the .mil file. The mill movement commands are read from the file and directed to the appropriate stepper motors with precise sequence and timing maintained.

I would not recommend this approach. All of what I have done is widely available in a number of existing systems and packages. It is a bit like designing and building my own "wheel" when there are plenty of very nice, well-made wheels available for me to buy at the same, possibly lower, cost. If I really needed a good CNC milling machine I could purchase any number of them that would likely work well; my approach is at the opposite extreme from that ready-to-start-milling approach.

I am not sure why I am doing it this way. It is about as far from practical as I could be. I guess it is mostly just to see if I can actually do it. I am certainly learning a considerable amount along the way. Everything from using background workers and multithreading in C#, to extensive use of regular expressions to parse and work with text strings, to interrupts, SPI, ISP, and hardware issues has been a learning experience. For me this project is not so much about the destination; it is the trip that I find interesting and challenging.

Darryl

December 02, 2014
by Ralphxyz

That is cool Darryl, I can really identify with your quest and completely understand your method.

You could always look at the source code for Marlin on Github, or other 3D printing firmware, to get an idea how they do things.

I am envious; I no longer have the energy. Must be that darn AGE virus.

I cannot believe Rick called me old. I am not OLD, I am aged, like a fine wine or good whiskey or a prime piece of meat.

Ralph

December 02, 2014
by JimFrederickson

Hello Darryl,

Maybe the answer to "I am not sure why I am doing it this way. It is about as far
from practical as I could be" is...

You have been a Farmer for too long...

ALL of the Farmers I know are intrinsically "PRACTICAL"...

Maybe now it is time for something "impractical from which you will learn something
completely different".

Of course if "EVERYONE WAS ALWAYS ONLY PRACTICAL, imagine how many things would not exist"...

I too had been wondering how you have been coming along.

I thought about inquiring, but for various reasons I didn't. It is still nice to read about your progress.

December 03, 2014
by Rick_S

Ralph Said:


I can not believe Rick called me old


Hey, I'm old too :D Granted, maybe not as old as you though! ;)

Darryl,

Readout feedback is critical on "real" CNC machine tools. The readouts not only provide a visual for the machine operator to see that the machine is in reality where it should be, they also provide feedback to the machine control to ensure that the machine is where it needs to be at any given time. If the readout feedback and the machine's expected position do not match within a certain margin of error (required due to acceleration/deceleration latencies), the machine control will error out and shut down to prevent scrapping a part. Encoder feedback is one thing that a good 3D printer or CNC both should have, so don't abandon that concept if you can make it work efficiently. I know the CNC machines at my place of work have rotary encoders on the spindles to verify RPM, rotary encoders on the AC servo motors to ensure they are rotating to the proper positions, and linear scales on all axes: X, Y, Z, W (spindle), A (rotary tables), and some others with additional axes. I've often thought that feedback was something sorely missing from many hobby machines.

Rick

December 03, 2014
by sask55

Thanks for the interest Guys.

I am having an issue with short pauses in the movement of the stepper motors; occasional stutters might describe it best. I believe this issue may originate from data flow timing on the master chip. At the typical milling speed the frequency of the signals sent to the slave chips is paced by the master chip. The incoming information from the computer arrives over the UART serial link. There is a limited amount of memory available on the master to hold the incoming sequence of control movements before the time is right to make each movement on the mill. The UART connection is capable of delivering this data stream much faster than it is being used, so I am delivering the control sequence data in packets of a set size. The PC waits until the master signals that it is ready to receive another packet into a circular-buffer-type storage variable. At the same time, the master is using timer interrupts to send the time-critical control characters to the appropriate slave chips. As these control characters are sent to a slave chip they are removed from the other end of the buffer on the master chip.
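Stripped way down, the master side works something like this (a sketch only, not my actual code; the names, sizes, and the ready signalling are simplified):

    #include <avr/io.h>
    #include <avr/interrupt.h>
    #include <stdint.h>

    #define BUF_SIZE   256   /* with uint8_t indexes, wraps automatically */
    #define PACKET_LEN 64    /* fixed packet size sent by the PC */

    static volatile uint8_t buf[BUF_SIZE];
    static volatile uint8_t head = 0;  /* filled by the UART RX interrupt */
    static volatile uint8_t tail = 0;  /* drained by the timer interrupt */

    ISR(USART_RX_vect)               /* PC -> master: store one control byte */
    {
        buf[head++] = UDR0;          /* index wraps at 256 on its own */
    }

    ISR(TIMER0_COMPA_vect)           /* paced drain: one control byte per tick */
    {
        if (tail != head) {
            SPDR = buf[tail++];      /* start the SPI transfer to the slave */
        }
        if ((uint8_t)(head - tail) <= BUF_SIZE - PACKET_LEN) {
            /* room for a full packet: signal the PC it may send the next one */
        }
    }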

I have the UART read, UART write, and GUI screen functions all running on separate threads on the PC. Currently, I am having some difficulty determining exactly what the occasional pause in the motor movement is related to. It is definitely software or data flow related, as the scope shows a corresponding unwanted delay in the control signal flowing to the slaves when a stutter is observed on a motor. I suspect it is related to packet handling, possibly an interrupt clash. I am thinking about ways to isolate and identify more closely what is going on. Once I get a better understanding of what is happening it may be possible to change something to mitigate this issue. I think the delay originates on the master chip, although it could possibly be on the PC as well.

I will try to keep you informed. This thing is very involved, with a lot going on, and it would be very difficult for me to provide enough information on this forum to bring anyone up to speed on the exact nature of my current issue. I am not certain what the underlying causes are; it may be some relatively simple solution I have overlooked in coding, or it could be a general flaw in my entire approach. This is certainly not the first unexpected result I have come across. I am reasonably confident it will be resolved as I close in on the exact nature of the problem.

Darryl

December 06, 2014
by sask55

I am having some trouble understanding how to determine, estimate, or measure the available memory on the micro after it is loaded with a program. It is not clear to me what memory is used for what purposes by the chip. I am using a 328P as the master on my setup. From the datasheet there are 32K bytes of flash, 1K bytes of EEPROM, and 2K bytes of SRAM. I am programming this chip over ISP with the Khazama AVR Programmer software on a PC.

I don't actually need to know the details of the memory allocation, but I would like to get a better handle on how large a uint8 array variable I can safely allocate in my code. The Khazama programmer reports 9866 bytes of flash verified when I load the flash with my code. Within that code I am currently declaring a one-dimensional unsigned 8-bit (uint8_t) array, 400 bytes in size.

Here are a few questions I have.

What is the significance of the _t in the variable syntax? Or: what is the difference between uint8 and uint8_t? Which would be better for getting the absolute largest possible array size?

Is the size of the arrays declared in a program included in the flash size reported by the programmer?

Specifically, how large a uint8[] array could I safely declare and then fill without running into problems? Could I declare a 10,000 byte array on this chip?

I am attempting to smooth out data flow in my system. I am considering trying much larger data packet sizes on the serial connection. If I had a large enough array, I could packetize the data into individual Gcode command lines. This would mean that each Gcode-commanded movement would arrive in its own individual packet for all but the longest Gcode movements. I believe that change alone would smooth out the brief pauses I am now seeing between packets on the UART.

I could do some trial and error testing to determine this, but I am hoping for a more intelligent method of limiting my array size, so I can be confident that it is not going to be a problem.

December 08, 2014
by BobaMosfet

Darryl-

The 'uint8_t' type is defined in stdint.h. It's just an unsigned char. The '_t' is a naming convention mandated by 'C99' at the time, and has no bearing on how things compile. In the 328, the memory spaces are separate, and your compile log should tell you how much space you're using in each when the build is complete.

Arrays are consecutive and packed; that's why you can use pointer indexing to walk an array. So, 400 8-bit chars is just that: 400 8-bit chars.
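For example (illustrative only):

    #include <stdint.h>

    static uint8_t buf[400];           /* exactly 400 consecutive bytes of RAM */

    static uint16_t sum_buf(void)
    {
        uint16_t sum = 0;
        for (uint8_t *p = buf; p < buf + sizeof buf; p++)
            sum += *p;                 /* packed: no padding between elements */
        return sum;
    }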

I would identify specifically which interrupt is being serviced at the expense of one that isn't, and then once I know which one, reevaluate how I'd handle that interrupt, or how much code I execute during that interrupt.

Hope that helps

BM

December 08, 2014
by JimFrederickson

On all of my Projects I find using the "avr-size" command to be very helpful.

It provides a "basic distribution" of what memory is being used in a module of code.

You can also use "trial and error" to figure out how your various allocations are affecting things if you are not sure what impact a change may have.

Just change 1 thing, recompile, and reprogram your chip. Then you will know how much
that one change affects things. (avr-size can be used the same way. I always run
it from the makefile for every module I compile, as well as for the entire
Project.)
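For example, the output looks something like this (the numbers here are made up, just to illustrate the layout):

    avr-size -B main.elf
       text    data     bss     dec     hex filename
       9866     142     310   10318    284e main.elf

Roughly: "text" is code in Flash, "data" is initialized variables (stored in Flash and copied to RAM at startup), and "bss" is zero-initialized RAM, which is where a large uninitialized array will show up.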

Also Library Functions that you use are not "free". (In terms of Program Space and
sometimes RAM.)

If you have a large program sometimes every bit of Program Space becomes necessary.

If you do not need to use Floating Point Math, that can save a significant portion of
the Program Space.

The same goes for the Print Functions too.

Sometimes making your "Programming Life" a little more difficult can make a Project
possible. (Within the constraints of the Microcontroller.)

December 08, 2014
by sask55

Thanks BM & Jim

That information may be of some help.

I think I have made some progress. I am changing the order in which I do things on the PC C# end of the communication stream to the micro. I thought that with 32K bytes of flash memory, and about 10K of flash loaded by my program, I should have room for at least 10K of array space on the master. In my trials it appears that the micro crashes if I try to use an array much above 3K (uint8_t[3000]). With my new approach to building and sending the packets of serial data to the micro, I think I may have mostly resolved the issue I was having. Moving larger packets at one time is not critical and may even be counterproductive.

I don't know if anyone is interested, but this is the general command/control data communication system I am using. I don't know how clear or understandable this may be; I find it difficult to explain.

Although the command sequences are held in an ASCII char file (my .mil file), that is not what is transmitted to the master chip. Each character read from the .mil file is processed using bitwise functions in C# code. The serial stream sent to the master is a byte (uint8) stream and is not readable as ASCII. These command bytes are generated by setting or clearing the individual bit values in each byte using bitwise functions. In a way it is not so much the value of the byte that is important, but the state of the individual bits within the byte that is used.

The two LSBs of each byte are used to "address" the byte. Those bits are used by the master to determine where to send the remaining six bits: they instruct the master to send the control bits to the x-axis slave, the y-axis slave, the z-axis slave, or to the master chip itself, for the action to be carried out. One bit in each control byte is used to choose between two possible command sets. That leaves two possible sets of up to five single-bit instructions in each incoming byte. This approach enables the PC to change the state of up to five input pins on an individual motor controller chip by sending one control byte to the master chip. So the motor controller's TORQUE pins, DIR pin, CLK pin, and ENABLE pin can all be set independently by one incoming command byte, which is loaded (five bits only) into the appropriate output-enabled PORT register on the slave chip connected to that motor controller chip's input pins.
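As a rough sketch, the unpacking on the master looks something like this (the actual bit positions in my code may differ; this just shows the idea):

    #include <stdint.h>

    #define ADDR_MASK  0x03   /* two LSBs: 0=x slave, 1=y slave, 2=z slave, 3=master */
    #define SET_BIT    0x04   /* selects which of the two command sets applies */
    #define DATA_SHIFT 3      /* the remaining five bits carry the pin states */

    static void dispatch(uint8_t cmd)
    {
        uint8_t addr = cmd & ADDR_MASK;          /* which chip the bits are for */
        uint8_t set  = (cmd & SET_BIT) ? 1 : 0;  /* which command set applies */
        uint8_t data = cmd >> DATA_SHIFT;        /* the five control bits */

        if (addr == 3) {
            /* command for the master chip itself */
        } else {
            /* ... forward 'data' (per 'set') to slave 'addr' over SPI ... */
        }
        (void)set; (void)data;                   /* placeholders for real code */
    }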

Darryl

December 08, 2014
by JimFrederickson

Hello Darryl,

I have a question...

You keep talking about 10k of Array Space, and I am a little confused about what you are referring to. (It seems you are referring to a "data buffer"?)

If this "Array" is going to flash then it is basically a "Static Array"...

Are you trying to define a 10k Array for Data Storage?

If you are using a Mega328 (yes, I did read that you said you are using a 328p):

 32kb Flash Program Memory

 1kb EEPROM

 2kb RAM

Yes, there is 32kb of Flash For Program Storage and Static Data Storage. (It is not
really practical, but possible, to use the Flash for Data Storage that changes.
This is not something that is commonly done.)

The 2kb RAM is what is "Primarily Used for Data Storage".

Any variable you define in C, unless specified otherwise, is going to end up in RAM.

i.e.

    //  Context assumed for this fragment: MYLCDXCOUNT and MYLCDYCOUNT are the
    //  display dimensions, mylcd is a global struct holding pos, charcount, and
    //  the buffer[] and display[] arrays, and lcd_init()/lcd_goto_position()
    //  come from the LCD library.

    void mylcd_init() {
        uint8_t i;

        mylcd.pos = 0;
        mylcd.charcount = 0;

    //  Initially when we initialize the LCD we need to make sure a 'full refresh'
    //  is done so the buffer and the display are written with different values

        lcd_init();

        for (i = 0; i < (MYLCDYCOUNT * MYLCDXCOUNT); i++) {
            mylcd.buffer[i] = 32;
            mylcd.display[i] = INVALID;
            }

        lcd_goto_position(0, 0);
    }

Here the "uint8_t i" takes up 1 byte of RAM.

I am just attempting to be clear on what you are trying to do...

December 09, 2014
by sask55

Thanks Jim

That is exactly what I was asking about: where are the arrays and variables held in the memory allocation on the chip? I was thinking it may not be the flash. I do have some memory of this from a while back, but could not recall the details and could not seem to find information about it. This 2K limit explains why I am not able to use larger data arrays. I will reduce the size of my uint8 buffer and other variables to make certain I remain within the 2K of RAM. Or perhaps I could take a look at using some type of memory expansion chip on the master board, SRAM or something?

Darryl

December 09, 2014
by sask55

After thinking about Jim's post it is very obvious that the variables are not held in the flash. I don't know how I started down that road. Just the fact that the loaded flash is non-volatile and the variables are volatile should have clued me in.

December 09, 2014
by BobaMosfet

Darryl-

As I stated in my earlier response, the datasheet tells you that the flash, data, EEPROM, and SRAM sections are all separate. Don't knock yourself over it-- you're focused. It happens. That's why it's good to talk about it-- frees the mind :)

BM

December 10, 2014
by JimFrederickson

I wouldn't advise using "external RAM".

Except for certain specific instances I find it more of a pain than it's worth.

If you stick with DIP packages, just for ease of use and the ability to use directly on
proto-boards, you have the Mega644 and the Mega1284. (These are both 40 pin chips.)

The Mega644 will give you 4kb of RAM and the Mega1284 will give you 16kb of RAM.

These are still AVR CPUs, so the programming model is mostly what you have become
used to.

If you go the XMega route, which brings other interesting things to the table, you
can get even more RAM. XMega does not have any DIP Packages though. For that, if
you want to make it as easy as possible, I would get one of the Schmartboard QFP/QFN to
DIP Adapters. (Radio Shack sells a subset, and you can order directly from Schmartboard too.)

XMega is a bit different to configure, but once you configure it there is not much
difference for what you are doing. The primary difference is the OPERATING VOLTAGE:
XMega is 3.6v and below. (Which could be an annoying change for your project.)

For your "pausing"/"stuttering" issues you really do want to make sure you are doing
as "little as necessary" within the Interrupt Service Routines. Also get rid of any
of the "DelayMS" Function Calls if you are using them.

A common mistake is "updating the display constantly". (That often eats a lot of
time/cycles.) Only update your display when you need to.

December 10, 2014
by sask55

Good information Jim.

I may not need to upgrade the micro. It seems to be working more smoothly after changing the way I send packets from the PC. I have attempted to make the master chip do as little as possible by dropping some of its usual duties while it is sending control bytes to the slaves. I figured there's no point updating the LCD or the calliper readings on the PC if they are changing too quickly to use.

I may still have an issue with the timer interrupt that sets the pace for SPI to the slaves occasionally clashing with the UART RX interrupt that occurs when the PC sends a packet to the buffer on the master. The timer interrupt interval may vary considerably as the motor speed is changed by the PC. The RX on the UART will occur after the master signals the PC that there is room in the buffer for the next packet.

I am away from home, sending this from my phone.

I will post how things are going.

Thanks for the info.

Darryl

December 10, 2014
by JimFrederickson

I, personally, never cease to be amazed by how much can be done with Microcontrollers. (I am not necessarily referring only to the capacity/capability of the Microcontrollers,
but the uses people put them to as well...)

Having started out, with Microcontrollers, in the days of 8031's, 8748's, and 8751's,
there has been a lot of change in some areas, and little change in others. (Always seems
a bit strange to me on that front.)

I liked programming for the Intel Chips better, as mostly I do like Assembler, but
I became a big supporter/user of Atmel Chips SOLELY because they were the first
Company to really adopt Flash, onchip RAM, and a standard Programming Model at
a reasonable price. (By "Programming Model" I am referring specifically to the
"actual chip programming interface"... I am no fan of their choice of Machine Code
Architecture. :( )

I am a bit confused by your statement "the Timer Interrupt Interval may vary
considerably"?

Are you changing the "actual interval of the timer interrupt" on the fly?

Or

Did you mean the "time to service the Timer Interrupt may vary considerably"?

December 10, 2014
by sask55

Jim

I am not certain it qualifies as on the fly, but yes, I am changing the OCR0A compare register value to control the motor speeds. The CLK pulse frequency to the controllers dictates motor speed. The motor speed can be set using the GUI interface on the PC or be programmed into a Gcode file. I change the clock-select bits in TCCR0B to start or stop the timer: clearing CS00, CS01, and CS02 stops the timer, and setting the 1/1024 prescaler runs it. So if a command is given to change the mill feed rate in a Gcode command, the timer is stopped, the value in OCR0A is changed, and then the prescaler is set back to 1/1024. The master is then sending CLK pulses (and other commands) to the slave chips at the new frequency.
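The general idea, as a simplified sketch (not my exact code, and the CTC mode setup is not shown):

    #include <avr/io.h>
    #include <stdint.h>

    /* Stop Timer0, load the new compare value, restart with clk/1024. */
    static void set_step_rate(uint8_t ocr_value)
    {
        TCCR0B &= ~((1 << CS02) | (1 << CS01) | (1 << CS00)); /* no clock source: stopped */
        TCNT0 = 0;                                            /* begin the new period cleanly */
        OCR0A = ocr_value;                                    /* 16 (fast) ... 253 (slow) */
        TCCR0B |= (1 << CS02) | (1 << CS00);                  /* 1/1024 prescaler: running */
    }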

As it turns out, the maximum frequency at which my motors will operate is not very high. I have the timer prescaler set to 1/1024. Using clear-timer-on-compare-match (CTC) mode, the value in the top register, OCR0A, sets the interval for the timer-generated frequency.

On my system I have incorporated two possible speed ranges for the movement of the table. In the faster range the software just writes a byte to the OCR0A register to change the motor speed. There are a couple of reserved byte values, 254 and 255, that command other actions. An OCR0A value of 253 is the slowest speed that can be set in this range and corresponds to about 1.71 inches/minute feed rate on the mill. The fastest reliable speed is about 27 inches/minute, obtained by loading OCR0A with a value of 16.

From tests using my scope I have determined that at any frequency faster than that, the motor controllers are maxed out at 100% duty cycle on the PWM. In other words, the coil resistance and impedance limit the current to less than the full rated 3A even though the controller is applying the full 24V supply voltage to the coils for the entire duration of each motor phase.

I have no idea if this is good practice, but it does seem to be working as I had planned, except for the odd, very short delay in movement.

December 22, 2014
by sask55

I am having a lot of trouble understanding the results I am seeing on my setup. I am beginning to suspect that RAM on the chip is being unintentionally overwritten at times, if that is possible.

What would be the expected consequences of having arrays or other variables declared in the code on a micro that require more RAM than is available on that chip? How can I determine the amount of RAM in use? Could writing to an array variable somehow wrap around and write to unexpected elements of the same array if the array was declared larger than the available RAM space on the chip? What, if any, warnings, error messages, or other results would I expect to see if I am trying to use more RAM than is actually available on the chip?

Darryl

December 22, 2014
by JimFrederickson

First of all...

It may take a few reads and hours of thought to determine how this fits in with your
Project.

When you are dealing with "multi-tasking systems" there is a need to think about
things in a different manner, and to change how things may normally be done.

There becomes a "great need" to "delegate authority/control" over certain aspects
of the program and how it operates.

Hopefully, I do not have any egregious errors...
(In my defense, my time is short and I haven't error-checked my post as much as I would like! :) )

One of the things I have found helpful is to have a small set of Functions that I use
where there is a "Critical Error". For me this is mandatory for "Multi-tasking
Programs". (Also, for me, "multi-tasking" means any program with more than a
Foreground Task. So Foreground and Background are 2 tasks.)

The "SOLE PURPOSE" of these functions is to

1 - "Trap a Critical Error"

2 - "Stop Interrupts"

3 - "Display/Output the Error Code"... (I say "Display/Output" because a Project doesn't always have a display.)

In regards to your initial question...

"Yes RAM can be over-written". (Errant code, or Data Collisions...)

Remember...

Data Variables you define in the program are stored from "Low-Memory to High-Memory"...

The Stack is used from "High-Memory to Low-Memory"...

So if there is not enough room available after your Variables for the Stack, the
Stack will start to "Eat your Data", and/or your Data will cause your Program
Functions to fail because their Data is corrupt or because their Return Addresses
are corrupt.

If you tend to "nest a lot of Function Calls" or have "Functions that declare a lot of
local data", (since every Function Call takes Stack Space, and all Data Local to a
Function takes Stack Space), that can force the Stack down into your Data.

One thing I also ALWAYS DO on my Multi-Tasking Projects is "monitor Stack Depth".

The Process I use is "not fool-proof" and is "not 100% accurate", but it does give me
an Idea of what is happening...

uint16_t local_i16;    /* SPH and SPL come from <avr/io.h> */

local_i16 = (SPH * 256) + SPL;

if (local_i16 < system_stackdepth) {
    system_stackdepth = local_i16;
    }

So "system_stackdepth" will contain the lowest value the Stack gets to in RAM.
(I usually put this code into some Timer Interrupt and update it whenever the
Timer Interrupt is run.

I also use, (which I mentioned before):

        avr-size -B sys-main.o
        avr-objdump -t -S -D sys-main.o > main.asm

"avr-size" will tell you how many bytes "each section of your program is using". (Like
the "Data Section".)

"avr-objdump" will create an ".asm" file for your code. I mainly use that to see where
data is going and how much space it uses. Sometimes I look at the code too, but
that can be difficult because it seems to create a series of "partial code" and I
find it hard to sometimes tell excactly what is going on...

BUT, I do find the Table at the front that shows where all your Data and Labels are
IMMENSELY useful.

Since the AVR is a single core there are really only 2 Task States possible...

There are "Foreground Tasks" and "Background Tasks".
("Background Tasks" being Tasks taken care of by Interrupt Functions.)

Lastly...
(2 things here...)

A:

In a Multi-Tasking system, which yours definitely is, ALL VARIABLES that may be changed
in an Interrupt Function must be declared as Volatile.

C is basically creating/optimizing a program for "Foreground Operations".

An "Interrupt" is something that is "Outside of Foreground".

What declaring a variable as "Volatile" does is it forces the code created by the
compiler to make sure to ALWAYS get the value of the Data Item from RAM when it is
used. Otherwise the code created by the Compiler may keep the variable in a
register and the code won't know it has been changed.
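A tiny example of the pattern (hypothetical names, but this is the shape of it):

    #include <avr/io.h>
    #include <avr/interrupt.h>
    #include <stdint.h>

    static volatile uint8_t packet_ready = 0;  /* shared with the ISR: must be volatile */

    ISR(USART_RX_vect)
    {
        (void)UDR0;           /* reading UDR0 clears the RX interrupt condition */
        packet_ready = 1;     /* Background Task signals the Foreground */
    }

    int main(void)
    {
        sei();
        while (!packet_ready) { }   /* volatile forces a re-read from RAM each pass */
        /* ... handle the packet ... */
        return 0;
    }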

B:

You need to implement a "semaphore system".
Actually a series of semaphores of different styles.

For me only 3 such mechanisms are ever used.

1 - Data variables changed by Interrupt Routines only (Background Variables)

2 - Data variables that need to be lockable

3 - Data variables that need to be changed in Foreground Tasks that are used by
Background Tasks

Mechanism 1 handles Background to Foreground Data:

While this could be used in the other direction, for "variables changed by
Foreground Tasks", I don't usually use it that way. (Although if I have
"multiple Foreground Tasks" I do sometimes use it between them.)

So for things like "Time".

Let's say you have a Structure like:
typedef struct {
    uint8_t  state;
    uint16_t mils;
    uint16_t minutes;
    uint16_t days;
    uint16_t workcount;
    uint16_t workcycles;
    uint16_t workcyclesk;
    } STRUCT_TIME;

Let's say the "Timer Interrupt Function" updates this structure. (Well yes it is a
"Structure Definition" and "Technically not yet a Structure", but you get the idea.)

Now let's say the "Foreground Task" needs to read the "minutes".

You can't reliably use code to just read the value in "minutes".

Because...

"minutes" is a 16-bit value and it will take code to get the low 8-bits and then the
high 8-bits. Sometimes, although not very often, a "Timer Interrupt Function" will
get called in between getting those 2 bytes.

Corrupted data...

So how do we fix that?

There is the "state" variable.

        uint8_t i, istate;
        uint16_t iminute;

        i = BOOLTRUE;

        while (i) {
            istate = mytime.state;

            iminute = mytime.minutes;

            if (istate == mytime.state) {
                i = BOOLFALSE;
                }
            }

So the structure "mytime" is ONLY changed by the "Timer Interrupt Function".

The "state" variable is an 8-bit value begining with 0 that is "xored with 1" on
"Timer Interrupt Function Call".

When the "Foreground Task" needs to get a value it must take care to make sure that
nothing changes in between accesses.

So a "while" loop is used.

If the "state" is the same after reading the variable as it was prior to reading the
variable when we know we have successfully read the variable.

Mechanism 2 handles Background Access to Foreground Variables:

This one is easier.

I would still use a "state" variable.

But this time I would use only 0 or 1.

So when a "Task" is using the variable it would write a "1" into state.

The complementary "Task" (Foreground or Background) that needs to use that
variable will first check to see if the "state" is "0".

If it is then all is good.

If "state" isn't "0" then the "Task" can't use the variable this time.

NOTE: If the "Task" that needs to use the "variable" is a "Background Task" it
can't wait, because there is only 1 core and the "Foreground Task" would never get
any time to do anything.

NOTE: Sometimes what I will do is use this to tell a "Background Task" when the
"Foreground Task" wants data. So the "Foreground Task" sets "state" to "0".
When the "Background Task" sees the "state" is "0" it then puts in the data and
sets the "state" to "1".

Mechanism 3 handles variables used by "Background Tasks" that are changed by
"Foreground Tasks"

Since there is ONLY 1 Core in an AVR, whenever "Foreground Code is being executed" we know that at that point there "is no Interrupt Function being executed".

But the "Foreground Task" can't just randomly change things.

So it needs to "stop the interrupts temporarily":

        cli();
        mytime.mils = 0;
        mytime.minutes = 0;
        mytime.days = 0;
        mytime.workcount = 0;
        mytime.workcycles = 0;
        mytime.workcyclesk = 0;
        sei();

December 23, 2014
by Rick_S

Wow, great post Jim! A lot to absorb, but it's pretty obvious you've been in this rodeo before. Thanks for taking the time to share your knowledge.

Rick

December 23, 2014
by BobaMosfet

I would check the datasheet for how memory is organized. Jim Frederickson has described the typical CPU architecture, whereby the entire available RAM block is divided into sections: zero page registers down low, pointers and handles for memory management, then the application zone (code space), followed by the actual allocation blocks, with the stack chain up high-- the entire block colloquially called 'The Heap'.

In the 328, Flash contains code low, bootloader high. Apart from Flash, a separate space exists: SRAM (which contains data registers and stack chain).

Being a good coder, and developing advanced software, means it's important to understand not just the chip architecture, but how your compiler compiles code as well-- how the code actually runs on a chip.

Beyond that, it's necessary to begin understanding how to code for performance, and how to decouple code sections that have to operate at different speeds, without dragging everything down to the lowest common denominator.

BM

December 23, 2014
by BobaMosfet

An example of decoupling, and quite possibly one of the most useful things you will ever use in code, is the concept of a queue. A queue is a section of memory that has 2 indexes or pointers. It can be a linked-list or an array-- for MCUs, I recommend an array.

I'll use the term 'marker' to mean either index or pointer, depending on what you do.

You need two markers-- a head marker and a tail marker. Initially, they are the same value. As things are added to the queue, the head marker is incremented. As things are taken off the queue, the tail marker is incremented. You know you have nothing in the queue if the tail marker catches up to the head marker. You overwrite your tail marker if your head marker overtakes it-- which usually means your queue needs to be a little bigger, or you need to process the tail of the queue faster.

I usually work it out so that the code that processes the tail is much, much faster than the code that processes the head-- this way the queue never overruns and is very lively, but it allows 2 different pieces of code to operate harmoniously without dragging each other down.
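In skeleton form (an array version; the names and size are just for illustration):

    #include <stdint.h>

    #define QSIZE 64                    /* power of two makes the wrap cheap */

    static volatile uint8_t q[QSIZE];
    static volatile uint8_t qhead = 0;  /* producer adds at the head */
    static volatile uint8_t qtail = 0;  /* consumer removes at the tail */

    static int q_put(uint8_t v)         /* returns 0 if the queue is full */
    {
        if ((uint8_t)(qhead - qtail) >= QSIZE)
            return 0;                   /* head would overtake tail: overrun */
        q[qhead++ & (QSIZE - 1)] = v;
        return 1;
    }

    static int q_get(uint8_t *v)        /* returns 0 if the queue is empty */
    {
        if (qtail == qhead)
            return 0;                   /* tail caught the head: nothing queued */
        *v = q[qtail++ & (QSIZE - 1)];
        return 1;
    }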

BM

December 23, 2014
by sask55

Thanks again Jim!

I totally agree with Rick, that is a great post. As you can tell, I am in way over my head here. It is going to take me a while to digest and make use of that information. I am going to do some more reading. I have been writing code with a few bad assumptions and misunderstandings that I now have to try and correct. It now seems kind of amazing that I have managed to get this far along with what appeared to be working code.

Darryl

December 23, 2014
by sask55

BM, I just read your posts.

Thanks, I think I am just now starting to get a feel for how many things I have not considered in detail up to this point. Again I am now rethinking and examining my approach.

I think NerdKits is still an amazing forum, with a lot of detailed information. I am very grateful for the friendly, very knowledgeable, and patient members posting concise and often useful information.
Thanks again.

Darryl

December 23, 2014
by BobaMosfet

Darryl-

Just, please don't stop.

It can seem like a steep curve to climb at first, but it will get easier. As you learn more of these things, your code will improve, and you will adjust how you approach problems. It starts becoming a lot of fun, and in fact, some problems that initially seem intractable, often are solved with incredibly simple, and elegant solutions.

BM

December 23, 2014
by JKITSON

Boba is correct

I spent 4 years off & on trying to get the BCD to work the way I needed. It turns out the one way that worked is so simple I am ashamed of myself for not seeing it before. You are all way above me in this thread, but I am getting some insight into this. Keep going, as I like to learn. Thanks all.

Jim

December 23, 2014
by JimFrederickson

I didn't take your post as a thought of "quitting", but rather as realizing you are
learning...

I already know you have "perseverance embedded within you".

I have "always believed" that it is IMPOSSIBLE to learn Computers "step-by-step"...

I believe you "have to learn Computers by OVERLOAD".

When you get a book, on any topic, read through it one time. DO NOT try to comprehend
everything as you go along.

Then wait... A day, a week, whatever seems right to you...

Process what you read... Think about it...

Then "re-read again".
This time, try to comprehend some of the things as you go along..

Some things will stick,
some things will not stick,
and what sticks for one person won't be what sticks for another...

Rinse and repeat, as many times as necessary. :)
Eventually it will become clearer...

There is just "too much in Computers that is interconnected. There are too many
things that can't make any sense without knowing other things. There are some
groups of things that can't make any sense without knowing how each part of the
group works."

I think you already have everything down that needs to be done...

You have a multiprocessor environment, and you have a heartbeat/clock to synchronize your
processors to the Master.

There are only 3 more things left now.

1 - Make sure to safely collect/access/change your variables using semaphores.

2 - Determine what your necessary Tasks are. (You already know that; you just need
to think of what needs to be done in terms of Multiple Tasks operating within a
system of Tasks.)

3 - Allocate your resources so that the Tasks can get accomplished together. (Mostly, for this, that is CPU and RAM.)

Your application has multiple things you want to do at the same time, but the AVR
is only single core, so your code will only do 1 thing at a time. Interrupts do not
enable your code to do more than 1 thing at a time; they are only a mechanism to
"suspend your current execution" so that "your code can do something of
higher priority/greater need".

Basically everything you want to do has constraints to it.

Some things have to be done within specific time constraints, and other things can
wait.

Of course there are the "peripherals" on the AVR Microcontroller that do operate
independently of the AVR Core. So that does provide some Parallelism.

The 2 things I see most that need to be done away with are the "delayms()" and
the constant updating of the LCD. (Although I am not sure if you even have an LCD?)

The ONLY TIME the "delayms()" should be used is during Initialization. (Just because
it's so convenient/easy and less error prone.)

After that, though, no more.

Queues are great for "putting off low priority things" until your code has time.

January 02, 2015
by sask55

I thought I would just post an update on my progress with the project. I have not had much of a chance in the last few days to work on the hesitation issue. Between holiday family events, work-related year-end tasks, and other commitments I don't get as much time to fiddle with this project as I would like. We have a trip booked to Hawaii from mid January to mid February and quite a list of things I should try and get done before then.

I am definitely continuing my project, but I likely will not post anything for a while, unless a question or two comes up while I am investigating and thinking about different concepts and approaches.

I greatly appreciate the suggestions and input I have received here. My lack of response was simply a matter of time commitment, not frustration or disinterest.

Darryl

March 07, 2015
by sask55

It may not be too much of a surprise to anyone, but after countless hours of reading, testing ideas, and experimenting, I think I am beat. I just cannot get the results I was hoping for on my project. I am considering ordering a ready-made motor controller system online for my mill.

I have made a number of changes to my approach. Unfortunately, most of the changes that I have attempted seem to have resulted in worse performance than I originally had. I think I am expecting too much from the micro handling the master SPI. A lot of the time I am having a great deal of trouble understanding the issues I am seeing. I make what I believe will be a small change and often get totally unexpected results that I cannot explain. I am still thinking on this, but am not hopeful.

I considered posting a version of the code I have tried here on this forum, but I don't think it lends itself to a casual look very well. I believe it would take considerable time for anyone to get a good enough handle on what is going on to comment on changes I could try.

Anyway, it has been interesting and I certainly have learned a great deal. I would just like to thank the NerdKits forum contributors who helped me out. I would do it all again just for the experience.

Thanks Darryl

March 07, 2015
by Noter

All part of the learning process. I lost count of the things I've built only to see them superseded by something I found on the web. Incredible how much open source software/hardware is out there, especially for the Arduino environment. Don't fret, all you've learned will come in quite handy as you go forward.

Isn't the controller software for a CNC mill and a 3D printer more or less the same? I would think at least the hardware is compatible between the two. Here's a list of controllers for 3D printing, Arduino Firmware; some of them say they support CNC milling in their descriptions.

As for hardware, most are using an Arduino ATmega2560 with a RAMPS shield and A4988 drivers. It's easy to source the components separately, but pretty hard to beat the kit price.

So it seems for around $30 - $35 you could be on your way with open source Arduino solutions.


March 09, 2015
by sask55

I am looking at a few options to buy some hardware, firmware, and related CNC software. I have fairly large stepper motors; they are rated for 3.0 amps/phase. The eBay deal you found is interesting, but the controller chips (2A) would not be rated high enough for my motors. There are certainly others available.

I will see what my time will allow. I think I will attempt to get a basic CNC system up and running before I even attempt making any modifications. I like the open source firmware approach. If I ever get around to it, I could investigate the possibility of keeping some of what I have working now. This approach would have a purchased system doing the mill movements, and possibly a simplified version of my system providing feedback to the software. It may be possible to use my digital calliper axis position readings and axis movement limit switches as feedback to the control. The callipers themselves transmit their readout relatively infrequently, so the best I could hope to achieve would be a periodic verification of the mill head's location. The callipers have no absolute position capability; they only return the relative reading measured from their last reset or start-up position.

I could be wrong, but it seems to me that one factor that may differ between a mill and a 3D printer is the resistance, or force, required to move an axis. The printer head would essentially always be moving freely; there should be no physical obstruction causing added torque on the axis movement motors. A mill, on the other hand, will often have considerable variation in the torque required to move the table, depending on a number of factors: feed rate, properties of the material being milled, the condition of the tool, spindle speed, etc. For this reason alone I think it is much more likely that a mill may stall or miss steps under higher-torque conditions. I think a system that verifies movements and stops the CNC if something is outside of spec may prove to be useful at times.

I have always had an ample supply of ideas and projects on the go; getting them done is another thing.

Anyways thanks for the tips.

March 09, 2015
by Noter

Maybe you could skip the RAMPS shield and A4988's and use the ATmega2560 with the drivers you currently have. That would still allow using one of the open source firmwares and would cost even less. You can get a 2560 on eBay for about $12 if you don't mind the slow boat from China.

I find my NerdKits experience very helpful when looking at or modifying Arduino libraries. Sometimes a fix is needed, and other times just a slight change for my purpose. It takes a bit to get used to the Arduino way, but under the covers it's the same ATmega programming we learned on the NerdKit. I think the NerdKits experience gives a big advantage compared to knowing only the Arduino.

March 10, 2015
by Ralphxyz

Darryl, you might want to consider a Smoothieboard.

Smoothieboard

Buy one

It is all open source and the developer is really helpful.

You will need to add heavier drivers for your steppers, but that has been done before, so there are people on the forum who would help you.
