
Author Topic: How to do TDD with hardware


Cameron054

    Topic Starter


    Newbie

    • Experience: Beginner
    • OS: Windows 10
    How to do TDD with hardware
    « on: November 26, 2018, 12:17:01 AM »
    All the projects I work on interface with a piece of hardware, and this is often the main purpose of the software. Are there any effective ways I can apply TDD to the code that works with the hardware?

    Update: Sorry for not being clearer with my question.

    The hardware I use is a frame grabber that captures images from a camera. I then process these images, display them, and save them to disk. I can simulate all the processing that takes place after the images are captured by using previously captured images that are stored on disk.

    But it's the actual interaction with the hardware that I want to test. For instance, does my software cope correctly when there isn't a camera attached? Does it properly start and stop grabbing? But this is so tied to the hardware that I don't know how to test it when the hardware isn't present, or whether I should even be trying to do this.
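
    To make what I'm asking more concrete, this is roughly the kind of seam I imagine needing: an interface that stands in for the grabber, plus a fake that replays saved frames and can pretend the camera is missing. The class and method names here (FrameGrabber, FakeFrameGrabber, CaptureService) are just placeholders I made up for the question, not a real frame-grabber API.

    Code: [Select]
    import unittest
    from abc import ABC, abstractmethod


    class CameraNotFoundError(Exception):
        """Raised when no camera is attached to the grabber."""


    class FrameGrabber(ABC):
        """Placeholder interface for whatever the real grabber SDK exposes."""

        @abstractmethod
        def start(self): ...

        @abstractmethod
        def stop(self): ...

        @abstractmethod
        def grab(self):
            """Return one captured frame (e.g. raw bytes)."""


    class FakeFrameGrabber(FrameGrabber):
        """Test double: replays canned frames (e.g. images previously saved to
        disk) and can pretend the camera is missing."""

        def __init__(self, frames, camera_attached=True):
            self.frames = list(frames)
            self.camera_attached = camera_attached
            self.running = False

        def start(self):
            if not self.camera_attached:
                raise CameraNotFoundError("no camera attached")
            self.running = True

        def stop(self):
            self.running = False

        def grab(self):
            if not self.running:
                raise RuntimeError("grab() called before start()")
            return self.frames.pop(0)


    class CaptureService:
        """My application code under test; it only sees the FrameGrabber interface."""

        def __init__(self, grabber):
            self.grabber = grabber

        def acquire(self, count):
            """Grab `count` frames, returning [] instead of crashing if no camera is present."""
            try:
                self.grabber.start()
            except CameraNotFoundError:
                return []
            try:
                return [self.grabber.grab() for _ in range(count)]
            finally:
                self.grabber.stop()


    class CaptureServiceTests(unittest.TestCase):
        def test_returns_frames_when_camera_present(self):
            service = CaptureService(FakeFrameGrabber(frames=[b"img1", b"img2"]))
            self.assertEqual(service.acquire(2), [b"img1", b"img2"])

        def test_copes_when_no_camera_attached(self):
            service = CaptureService(FakeFrameGrabber(frames=[], camera_attached=False))
            self.assertEqual(service.acquire(2), [])


    if __name__ == "__main__":
        unittest.main()

    The real grabber would just be a second implementation of the same interface wrapping the vendor SDK, so the tests above never need the hardware plugged in. Is this the sort of thing people actually do in practice?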

    2nd Update: I'm also looking for some concrete examples of exactly how people have dealt with this situation.

    DaveLembke



      Sage
    • Thanked: 662
    • Experience: Expert
    • OS: Windows 10
    Re: How to do TDD with hardware
    « Reply #1 on: November 26, 2018, 09:18:04 AM »
    I work with equipment and software that has frame-grabber functionality for the USPS. Mail rushes past a camera, and with an encoder (timing/tracking) and break-beam light barriers the system gets the timing right to capture the leading and trailing edge of every mail piece, taking a full-scale picture of it from a camera that has a slit in its aperture plate. The picture goes from the camera over the frame-grabber cable to a frame-grabber card, and the software works very fast to save the picture of one mail piece and grab the next, which is about 3 inches behind it and pinched between belts moving around 23 MPH.

    There are all sorts of calibrations and fine-tuning that I do, as well as fun troubleshooting, to make it all work. When something is wrong, the problem can be a blurred or crushed capture if the capture rate is wrong, or the mail going past is too fast or too slow, possibly indicating a servo issue or a bearing that is dragging down a section of the machine and making it run slower than it should. While I have access to change hardware and software settings, I am limited in that the hardware and software were engineered for them. Understanding how it all works is what lets me get it running properly and find the problems.

    Regarding your statement/question:

    Quote
    But it's the actual interaction with the hardware that I want to test. For instance, does my software cope correctly when there isn't a camera attached? Does it properly start and stop grabbing? But this is so tied to the hardware that I don't know how to test it when the hardware isn't present, or whether I should even be trying to do this.

    To me it seems as though you really need the hardware in front of you to see what it will do. From a programming standpoint you could add a routine that checks whether the hardware is present, but this would likely be a one-shot check before moving on to the rest of the program, which is a loop of image capturing and possibly OCR functionality as well.

    In our case, for example, the frame-grabber hardware is connected to a powerful OCR system with a cluster farm of OCR accelerators, so the dual-Xeon server handling the frame grabbing and image creation can cast the image out to the cluster and get results back faster. The decision about where a piece of mail will sort has to happen in less than 500 ms from the time the piece leaves the camera, no matter how sloppy someone's handwriting is. There are rules to try to use the ZIP code as an identifier, and if the ZIP code is missing or too sloppy to make out, the address itself is used to work out which ZIP code it likely belongs to, which takes more processing power and still has to happen within that 500 ms.
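
    Something like this is what I mean by a one-shot check before dropping into the capture loop. This is only an illustrative sketch; open(), close(), grab() and so on are stand-ins for whatever your real SDK calls are.

    Code: [Select]
    def hardware_present(grabber):
        """One-shot probe: try to attach to the camera and report True/False."""
        try:
            grabber.open()          # stand-in for the real SDK's attach/enumerate call
        except Exception:           # real code would catch the SDK's specific error type
            return False
        grabber.close()
        return True


    def run(grabber, process_frame):
        if not hardware_present(grabber):
            print("No camera detected - check cabling and power, then restart.")
            return
        grabber.open()
        try:
            while True:              # normal capture (and OCR) loop runs until shutdown
                process_frame(grabber.grab())
        finally:
            grabber.close()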

    As for whether the software will stop gracefully if someone disconnects the camera from the frame-grabber card, versus crashing in an unusual way: this really needs to be tested with the hardware present. Do exactly that and see what happens. If the result is bad, go back into the code and add a heartbeat on the camera connection, or something similar that will pause or stop the program when the camera suddenly goes missing. However, from a design standpoint this can eat into processing speed if the heartbeat is eating up image-transmission time. If the camera itself has an internal buffer, a heartbeat probably wouldn't affect it much, because the camera's buffer can send the image data to the frame-grabber card between heartbeats.
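
    To illustrate that trade-off, the heartbeat doesn't have to run on every frame; polling the link only every N frames (or on a timer) leaves most of the time for image transfer. Again, camera_alive() and the rest are made-up stand-ins, not a real API.

    Code: [Select]
    def capture_loop(grabber, process_frame, heartbeat_every=100):
        """Capture frames, polling camera health only every `heartbeat_every`
        frames so the check doesn't eat into image-transmission time."""
        frames_since_check = 0
        while True:
            if frames_since_check >= heartbeat_every:
                if not grabber.camera_alive():   # stand-in for the SDK's link-status call
                    print("Camera link lost - stopping capture.")
                    grabber.stop()
                    return
                frames_since_check = 0
            process_frame(grabber.grab())
            frames_since_check += 1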

    Sorry I'm probably not much help with this, but the best answer to your questions, as I see it, is to actually have the hardware present so you can see exactly how it handles the conditions you're concerned with. If the machinery etc. doesn't exist yet and you're engineering something new, then you will have to build up a simulator that can simulate conditions similar to how it will actually run. That comes at a cost in the engineering process, but many bugs and problems are best discovered and corrected this way.
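
    If you do go the simulator route, it can start out very simple: a fake camera that emits synthetic frames at a configurable rate and can be told to "unplug" itself after so many frames, so you can rehearse the failure cases without the real machine. Everything here is made up for illustration.

    Code: [Select]
    import time


    class SimulatedCamera:
        """Emits numbered synthetic frames at a fixed rate and can simulate
        the camera being unplugged after a given number of frames."""

        def __init__(self, fps=30, disconnect_after=None):
            self.interval = 1.0 / fps
            self.disconnect_after = disconnect_after
            self.frames_sent = 0

        def grab(self):
            if self.disconnect_after is not None and self.frames_sent >= self.disconnect_after:
                raise ConnectionError("simulated camera unplugged")
            time.sleep(self.interval)            # mimic the real capture cadence
            self.frames_sent += 1
            return ("synthetic frame %d" % self.frames_sent).encode()


    # Example: rehearse losing the camera after 5 frames.
    cam = SimulatedCamera(fps=10, disconnect_after=5)
    try:
        while True:
            print(cam.grab())
    except ConnectionError as err:
        print("Handled mid-run disconnect:", err)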