The Singing Stepper: A DRV8818 Primer

The code for this project can be downloaded here: MIDI_2_STEPPER_MSP430F1612_10-25-2012

I am not the first one. Many others before me have used a frequency signal to feed a stepper driver, which in turn lets the stepper sing a tune. I think the MakerBot guys did it with the Mario Brothers tune, if I am not mistaken. I could be mistaken, though, and it could have been somebody else and a completely different tune, but that is irrelevant. The point is I was not the first.

Maybe, on the other hand, I was the first to do it with a DRV8818, a bipolar stepper driver capable of supplying up to 2.5 A (although in the near future I will teach you how to get up to 3 A from it!) and up to 8 degrees of microstepping. Again, it is irrelevant, because chances are what you want to know is how on Earth I can make your stepper sing. I am not going to tell you this is the ultimate tool to get that supremely hot-looking lady in your classroom to fall for you while you serenade her a la stepper, but you can always try hitting her over the head with a NEMA 23... OK, never mind! LEGAL DISCLAIMER: I am not responsible if you decide to practice Neanderthal romanticism moves on your future girlfriend (or restraining-order holder) with a stepper or any other item this article may inspire you to use.

Making Your Stepper Sing:

The DRV8818 (like the DRV8811, DRV8821, DRV8824, DRV8825, DRV8834, etc.) is a bipolar stepper motor driver with an internal indexer. What this means is that all bipolar phase commutation is carried out internally by a logic core in charge of regulating the degrees of microstepping (as requested by the USM bits), the direction of rotation (as requested by the DIR input) and the stepping rate (as commanded by the STEP input). Crucial to understand is that whatever square-wave frequency I apply at the STEP input will translate into an MS:1 stepping rate at the output. WAIT! What is that MS:1 stepping rate? Well, if I use full steps (1 motor shaft step per STEP rising edge at the input), my stepper's rotating frequency will be 1:1 with the STEP input frequency. If, on the other hand, I use half-step commutation, or 2 degrees of microstepping, then I get a 2:1 stepping rate, meaning I need two STEP transitions for my stepper to move 1 full step. The same applies to 4 and 8 degrees of microstepping.
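To put numbers on that ratio, here is a small standalone sketch of my own (not from the project code) relating a STEP frequency to shaft motion, assuming a typical 200-full-step-per-revolution (1.8 degree) motor:

#include <stdio.h>

/* Illustration only: how the STEP input frequency maps to shaft motion
   for each degree of microstepping, assuming a 200 step/rev motor. */
int main(void)
{
  const double stepFreqHz = 440.0;                  //Square wave applied to STEP
  const int degreesOfMicrostepping[] = {1, 2, 4, 8};
  const double fullStepsPerRev = 200.0;
  int i;

  for (i = 0; i < 4; i++)
    {
    double fullStepRate = stepFreqHz / degreesOfMicrostepping[i];
    double revPerSec = fullStepRate / fullStepsPerRev;
    printf("%d:1 -> %6.1f full steps/s, %.3f rev/s\n",
           degreesOfMicrostepping[i], fullStepRate, revPerSec);
    }
  return 0;
}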

All of this explanation is kind of useless for this project, as I can tell you in advance I will be using full step (or a 1:1 rate). However, I think it is worthwhile to understand why this selection was made.

Microstepping is a mechanism imposed on a stepper's windings to get smoother motion out of the motor. The theory is that if I move my shaft in big jumps, the motion will be ridden with vibrations, which in turn translate into motion inaccuracies, audible noise and possible stalling conditions. If we use microstepping, on the other hand, the higher the resolution imposed, the smaller each step motion is. This gives us smoother motion with less vibration and hence less audible noise.

As you may have guessed by now, since we are trying to make our stepper sing, we actually want to get that vibration and audible noise out of the motor, right? CORRECT! Hence, microstepping is not what you want to use as you attempt to sweep your Dulcinea off her feet. In fact, for this project we actually need to abuse the concept of motion quality. You may be wondering why on Earth we would want to do that, but as we engineers often like to quote: "Because we can!" I also like to point out, however, that if you learn to break an application on purpose, it is quite possible to learn how not to break it. This application is a great example of how we can make the motor noisier. It then stands to reason that if we do the opposite, we should be able to accomplish the opposite as well, AKA smoother motion, which is what we should use on real projects! (Real project definition: the ones you work on so you can maintain a family consisting of a sane Don Quixote, his respective Dulcinea and possible offspring.)

By using full steps we get the worst vibration we can out of our motor. Coupled with large currents, the vibration is exacerbated and our song becomes audible. But how do we embed the song into the motor? As I explained before, if we use full-step commutation, a STEP frequency translates into the same frequency at the output. In other words, if I want my stepper to play the piano's A440 note, all I have to do is feed a 440 Hz square wave into the STEP input and VOILA! The stepper will rotate at 440 steps per second and the audible frequency should resemble the A440 of any musical instrument such as a guitar, piano, or egomaniac stepper.

So the first step is to gather the note-to-frequency information, and for that I went to the Wikipedia page detailing the conventional musical notes and their respective frequencies. You can find that webpage HERE.
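(A quick aside for the formula-inclined: in standard equal temperament, MIDI note number n maps to frequency f(n) = 440 × 2^((n − 69)/12) Hz. Keep that 69 in mind; it will come back in a moment.)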

Now, all of this information is quite useless if we do not have a means to generate the respective frequencies. There are many ways to do so, but my favorite is always to use a microcontroller. In this case I employed one of my favorite MSP430s, the MSP430F1612. To be honest, I am using almost no resources on this medium-to-large-sized microcontroller, but I have so much code for it already that the project took only a few hours. The resource I am using to generate the STEP frequencies is a timer configured in output compare mode, which gets its capture/compare register (TxCCRy) updated with the timer value required to generate the frequency in question.

Here is the first programming requisite. The Wiki tells me the frequencies I need to generate a particular note, but we must transform these frequencies into timer clock counts so our internal timer can actually generate said frequency. Using Excel as my tool to determine what the clock count per frequency has to be, I generated the following file: MIDI Notes, Frequencies and Timer Clock Count LookUpTable

I hope looking at the spreadsheet does not cause a brain aneurysm, but basically what I have done is take each note frequency and obtain its inverse so that I can see what the period is, and then use the internal oscillator frequency (8 MHz) to determine how many timer clock cycles I need to get the respective note out. The timer will be configured to toggle its output, so I actually need two compare events per output period (one causing the rising edge and the other causing the falling edge), which means each compare interval is half the note's period. One last note before I proceed. As you can see on the first spreadsheet tab, there is an offset going from 0 to 127 associated with each note. For example, A440 is offset 69. I will explain why this is important in a little bit.
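If you would rather skip Excel, the same arithmetic fits in a few lines of C. This is my own sketch (not the project code), assuming the 8 MHz timer clock mentioned above: each table entry is 8,000,000 / (2 × f), saturated to 65535 when the note is too low for the 16-bit timer.

#include <stdio.h>
#include <math.h>

/* Sketch: compute the toggle-mode compare counts for all 128 MIDI notes.
   Two compare events per output period, hence the factor of 2. */
int main(void)
{
  const double timerClockHz = 8000000.0;
  int note;

  for (note = 0; note < 128; note++)
    {
    double freqHz = 440.0 * pow(2.0, (note - 69) / 12.0);  //Equal temperament
    double count = timerClockHz / (2.0 * freqHz);          //Half-period count
    unsigned int entry = (count > 65535.0) ? 65535u : (unsigned int)(count + 0.5);

    printf("%3d %9.2f Hz %5u\n", note, freqHz, entry);
    }
  return 0;
}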

You can also see on the first tab that a region is marked in red. The reason I marked it so is that these are notes for which I would have required a timer overflow in order to generate the respective frequency. Since my timer is a 16-bit unit, all I can do is count up to 65535, which at 8 MHz in toggle mode corresponds to 8 MHz / (2 × 65535) ≈ 61.0 Hz. That makes B1, at ~61.74 Hz, the lowest note I can reproduce. Any frequency lower than this would require me to scale the internal timer clock down. Although feasible, I decided to just skip these notes, as I was not going to need them for my "art piece".

The second tab of the spreadsheet has the resulting 16-bit numbers I use to fill a 128-word-deep lookup table named MIDINotesArray[128]. This lookup table is the one I choose from in order to fill the timer register with the information required to generate the commanded frequency. Notice the "illegal" notes were defined as 65535, i.e., saturation. Again, I will not be using these notes, so that is irrelevant in this case.
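For illustration, an abridged version of that table in C could look as follows (a sketch only; the real array fills all 128 entries with the spreadsheet values):

/* Abridged sketch; the actual table has every entry populated. */
static const unsigned int MIDINotesArray[128] =
{
  [0]   = 65535,  //Notes 0 through 34 are "illegal" (saturated)
  [35]  = 64794,  //B1, ~61.74 Hz: the lowest reproducible note
  [69]  = 9091,   //A440
  [127] = 319     //G9, ~12544 Hz
};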

Now wait... Did I say "the commanded frequency"? Who the heck commands the frequency? Here is where the song making comes to fruition and why the note offset is important. The easiest way to get any tune recorded and reproduced on a project of this nature (and the great majority of music projects nowadays) is to use the MIDI protocol. In MIDI, there is no such thing as notes or frequencies. What you have, on the other hand, is a serial protocol sending commands to turn "something" ON or OFF, among many others. Yes, this supposed "something" happens to be called a NOTE, but I can assure you there is no frequency information associated with it. As a result, this alleged NOTE can be anything. You could have a dog bark or some lightning noise as a response to any of these commands. Some people actually command Christmas lights as they massively ornament their homes into the most impressive light-based orchestra! It actually doesn't matter. What MIDI cares about is which NOTE offset is being issued and the characteristics (volume, duration, etc.) of said NOTE activity.
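For reference, a MIDI Note On channel message is three bytes on the wire (this is plain MIDI, nothing project-specific). For example, commanding A440 at a healthy volume:

//Example MIDI Note On message for A440
const unsigned char noteOnA440[3] =
{
  0x90,   //Status byte: Note On (0x9n), channel n = 0
  0x45,   //Data byte 1: note number 69 (A440)
  0x64    //Data byte 2: velocity 100 (a velocity of 0 doubles as Note Off)
};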

So my MSP430 has a lookup table with the timer information necessary to issue a frequency. How do I connect the two together? What I did was code a simple serial communications Interrupt Service Routine (ISR) which reads the bytes sent by a MIDI output and uses the information in the MIDI packet to tell the system which offset from the lookup table should be loaded into the timer. Here is the super simple ISR function:

#pragma vector=USART0RX_VECTOR
__interrupt void USART0_Receive(void)
{
  int tempBuffer;

  tempBuffer = RXBUF0;                //Read the received byte
  if (tempBuffer & BIT7)
    {
    SerialPointer = 0;                //Bit 7 set: status byte starts a message
    }
  else
    {
    SerialPointer++;                  //Bit 7 clear: data byte, advance index
    }

  SerialBuffer[SerialPointer] = tempBuffer;
  if (SerialPointer == SERIAL_BUFFER_LENGTH - 1)
    {
    MessageComplete = true;           //All three bytes received
    }
}
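For completeness, the globals the ISR relies on would look roughly like this. This is my reconstruction, since the actual declarations are not shown in the post; the names simply match the ISR above:

#include <msp430.h>
#include <stdbool.h>

#define SERIAL_BUFFER_LENGTH 3        //A MIDI channel message is 3 bytes long

volatile unsigned char SerialBuffer[SERIAL_BUFFER_LENGTH];
volatile unsigned char SerialPointer = 0;
volatile bool MessageComplete = false;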

When I receive three bytes from my MIDI output, I set my message complete flag and then the other piece of code comes to work. Before we go into that code, one thing to understand about MIDI is that everything is 7-bit based. That is because the 8th bit is used to denote the first byte. If bit 7 is set, then it is the opcode (status) byte. If bit 7 is clear, then the byte is data or parameters. Hence, as you can see in the code above, any byte having its bit 7 set must be the first byte. I use this information to clear my serial buffer index and start a reception. You can check more on the MIDI protocol HERE, although all you have to do is type MIDI into Google and you may get more responses than there are neurons in any given human brain.

The following code is executed every 250 µs as part of my multitasking RTOS, but it only acts when a message has in fact been received (MessageComplete == true).

if (MessageComplete)
  {
  MIDICommand = SerialBuffer[0] & 0xF0;       //Extract Command Information
  MIDIChannel = SerialBuffer[0] & 0x0F;       //Extract Channel Information

  if (SerialBuffer[2])                        //Nonzero velocity: Note On
    {
    TBCCTL1 &= 0xFF1F;                        //Clear the output mode bits
    TBCCTL1 |= TB_OUTMOD_TOGGLE + CCIE;       //Toggle output, enable interrupt
    SteppingRateTMR[0] = MIDINotesArray[SerialBuffer[1]];
    TBCCR1 = SteppingRateTMR[0];              //Load the half-period count
    }
  MessageComplete = false;
  }
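Not shown above is the case where the velocity byte (SerialBuffer[2]) is zero, which most MIDI sources use as a Note Off. One way to silence the motor, sketched by me rather than taken from the project code, is to stop the output from toggling:

  else                        //Velocity of zero: treat it as a Note Off
    {
    TBCCTL1 &= 0xFF1F;        //Output mode 0: the STEP pin stops toggling
    TBCCTL1 &= ~CCIE;         //No further compare interrupts needed
    }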

I have removed the portion of code which acts on which channel is commanded to react, as in the end I only used one channel. MIDI, on the other hand, can handle up to 16 channels. My idea was to have 16 steppers singing together, but the mess was so strenuous I decided to go with a solo act.

So that's pretty much it! Well, is it? I guess I need to mention I recorded my song on a Mac mini using Logic Studio and a keyboard controller, and then played the performance back through the MIDI out. You can actually use any MIDI source, so my setup is by no means the only plausible method. So we hit play and?

The MSP430 receives serial commands at the MIDI rate of 31.25 kbaud, interprets them 3 bytes at a time, and uses the information in those three bytes to extract the timer count from the lookup table. That count is fed into the timer register, which generates a square wave of the right frequency into the DRV8818 STEP input, so that the same frequency rotates the stepper and we get this:

[Video of the singing stepper performance]
