StarFish Medical November Newsletter featuring Industry Insights, StarFish Speakers in November, BIOMEDevice San Jose, Avoid Common Mistakes, FDA Update, Clean Room and A Bug in my SOUP

Message from the President

Just after our October issue was published, we learned that StarFish Medical earned spot 78 on Business in Vancouver's 2013 list of the fastest-growing companies in B.C. Companies are ranked by percentage growth in revenue between 2008 and 2012. I want to thank our employees, advisory board, and customers like our readers for driving StarFish revenue growth over those five years.

This issue kicks off my video series on Avoiding Common Mistakes for medical device companies.

Industry Insights, November speaking engagements, Free BIOMEDevice San Jose Passes, FDA Update, Partner News and Kenneth MacCallum's timely Toyota SOUP blog round out this issue.

As always, thanks for reading.

Scott Phillips, President


Toyota, SOUP and Medical Device Development

Toyota's recent $1.5 million loss in an Oklahoma court illustrates that it's pretty much impossible to write software without some third-party code creeping in. IEC 62304:2006 calls this "Software of Unknown Provenance," or SOUP, referring to software with unknown safety-related characteristics or software developed under an unknown methodology.

I know the situation very well. Operating Systems, code libraries supporting the CPU, or even artifacts created by the compiler have all led to chunks of code in my medical device applications that I didn’t write and don’t know for certain are safe.

READ MORE »


StarFish Medical Speakers in November

Vesna Janic, StarFish Medical and ViVitro Labs Director of Quality/Regulatory, will speak about the differences between Good Laboratory Practice (GLP) regulations and Good Manufacturing Practice (GMP) regulations at the Pacific Regional Chapter of the Society of Quality Assurance (PRCSQA) 2013 Fall Training November 13-14, 2013 in South San Francisco, California.



Martine Janicki, StarFish Medical PMO leader, will speak on the versatility of science and technology training and the value of engineering analytical thinking at Island Women in Technology (iWIT), November 20, 2013, at the Empress Hotel, Victoria, BC.

Dave Dobson, StarFish Medical Director of Business Development, will attend the 2013 SoCalBio Investor & Partnership Conference November 6, 2013 in Los Angeles.
New Initiative to Support Technology Commercialization and Business Growth in Western Canada

Free passes to visit StarFish Medical at Booth 402 at BIOMEDevice San Jose, December 4-5!

Partner News

Biolux Research Ltd, the developer of Light Accelerated Orthodontics (LAO™) technology, is pleased to announce the publication of results from a multi-centre clinical trial of its extra-oral OrthoPulse™ system.

Boreal Genomics secures $18 million in Series C financing. The financing will be used to expand commercial operations in the life science research market and to launch clinical applications for non-invasive tumor profiling.


Clean Room Update

Launching November 1, 2013, the StarFish Medical Class 100,000 clean room, with capabilities up to Class 10,000, is part of our expansive facilities for product design, development and manufacturing. Jason Dolynny, Director of Manufacturing: "The clean room allows a wider range of services. In addition to regular services, we can deliver custom prototypes requiring a clean room manufacturing environment and meet requests for higher-volume medical disposables."


Medical Industry News

The Hidden Danger of Poorly Controlled Suppliers
Used parts ended up in a prototype aircraft. Aviation is highly regulated, so this is a surprise; it reminds us all to monitor suppliers carefully and to perform thorough incoming inspection and testing, plus final acceptance testing, before releasing and shipping products.

Motorola takes on Modular Cell Phone concept
The article specifically mentions pulse oximeters. We'll wait to see the FDA's view of all this…




FDA Launches New eCopy Program Webpage; updated eCopy Program Guidance

Since January 1, 2013, applicants have been required to provide an electronic copy (eCopy) of their medical device submissions along with their paper copy. The eCopy Program has improved the review process by allowing the immediate availability of an electronic version of a medical device submission for review by FDA staff, but may not yet be well understood by all applicants.
 
In an effort to reduce errors made in submitting an eCopy and to address questions raised by applicants, the FDA launched an eCopy Program webpage and updated the eCopy Program guidance to include the following: points of clarification, a summary of steps for creating and submitting an eCopy, more examples demonstrating the different technical standards, and changes to the required PDF technical standards with regard to embedded attachments/attributes and security settings.

 If you have questions about the eCopy program, please contact the eCopy Program Coordinators at CDRH-eCopyinfo@fda.hhs.gov or 240-402-3717. 
Center for Devices and Radiological Health
Food and Drug Administration



Toyota, SOUP and Medical Device Development (cont'd)

How to avoid this unknown code?  Here’s what I used to do: write my firmware from scratch in assembly language. Even then I relied on the assembler to correctly map to machine code and the processor to be bug-free. I was so close to the metal it's unlikely there were any hidden surprises, at least by the time I tested and debugged.

Although I once prided myself on writing clear and well-structured assembly, I don't expect I'll ever do that again. There's a huge advantage to writing in a higher-level language, and the market expects a level of sophistication in user interfaces and connectivity that's not feasible to code from scratch.

So, how do I prove to myself and to the regulatory bodies that my code is safe when I only write a fraction of it?

Some say using a commercial set of tools and libraries helps; others feel widely used and tested open-source solutions are the best answer. Either way, my code base is largely developed by someone else, and I feel like I'm relying a bit too heavily on faith. Neither is the answer.

What can I do to ensure my code is safe? First, let's define safe. For this discussion, safe code is code that has a high probability of correctly performing its risk-of-harm mitigations. This assumes I've done a good job of identifying potential hazards. Using that definition, I'll rephrase the question.

How can I ensure that the probability of failure of my hazard mitigations is exceedingly low? Here are some strategies to gain confidence that my mitigations will not fail:

If I can, I lock down the particular version of the development environment and all the libraries I have chosen. Don't underestimate the chances of bugs creeping in with a new version of a compiler or library. If I've spent the bulk of my development time with a particular tool chain, I've also been building confidence and experience with it. Benefitting from the fruit of all that testing is one way to reduce development time and cost.
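
One way to make that lock-down hard to bypass is to check the tool chain at build time. Here's a minimal sketch, assuming a GCC-based build; the pinned version numbers are purely illustrative, not a recommendation:

    /* Minimal sketch: fail the build if the firmware is compiled with anything
     * other than the qualified compiler. The GCC version macros are real; the
     * pinned version (4.7.x) is illustrative only. */
    #if !defined(__GNUC__)
    #error "This project is qualified against GCC only."
    #elif (__GNUC__ != 4) || (__GNUC_MINOR__ != 7)
    #error "Unqualified compiler version: re-verify before changing the tool chain."
    #endif

    int main(void)
    {
        return 0; /* placeholder for the real application */
    }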

I protect the scope and priority of critical variables and methods. I encapsulate that code in its own process or thread if appropriate. I keep intermediate and state variables private so they can't be interfered with by other code. If variables contain particularly important data, then redundant storage and error checking may be good strategies. If necessary, I block interrupts and threads during critical code sections. I use watchdog timers to ensure code is serviced sufficiently frequently. One option is to run code on its own processor. It's important to clearly define critical code as separate software items in the architecture: this allows a reduced level of testing for non-critical code and helps create the isolation required.
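
As a rough sketch of what that encapsulation can look like in C: a critical value kept private to one module, stored redundantly with a bit-inverted shadow copy, and updated inside a critical section. The names and the platform hooks (disable_interrupts, enable_interrupts, enter_safe_state) are hypothetical stand-ins for whatever the real target provides:

    /* Minimal sketch: a private critical variable with redundant storage and
     * a consistency check on every read. */
    #include <stdbool.h>
    #include <stdint.h>

    extern void disable_interrupts(void);   /* hypothetical platform hooks */
    extern void enable_interrupts(void);
    extern void enter_safe_state(void);     /* hypothetical corruption response */

    static volatile uint16_t dose_limit;        /* private: file scope only */
    static volatile uint16_t dose_limit_shadow; /* redundant, stored inverted */

    void dose_limit_set(uint16_t value)
    {
        disable_interrupts();                /* keep the pair consistent */
        dose_limit = value;
        dose_limit_shadow = (uint16_t)~value;
        enable_interrupts();
    }

    bool dose_limit_get(uint16_t *out)
    {
        disable_interrupts();
        uint16_t value = dose_limit;
        uint16_t shadow = dose_limit_shadow;
        enable_interrupts();

        if (value != (uint16_t)(~shadow)) {  /* corruption or interference detected */
            enter_safe_state();
            return false;
        }
        *out = value;
        return true;
    }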

I trap for all known edge cases. This includes the obvious steps of bounds-checking arrays and protecting variables against overflow and underflow. Consider adding data quality metrics and defining what actions to take when the metrics don't meet their passing thresholds. Be aware that the quality checks then become part of your mitigation and must be verified themselves. Also keep in mind that it may not be a failure of a particular computation that leads to it producing an incorrect result; it could be another thread, interrupt, process or the OS itself (if you're using one) messing with memory or CPU time. This becomes even more likely as you push your processor to the limits of memory or speed.
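
Two tiny examples of the sort of trapping I mean, with illustrative names rather than code from any real project: a bounds-checked buffer write and an addition that saturates instead of wrapping on overflow:

    /* Minimal sketch: refuse out-of-bounds writes and saturate on overflow
     * rather than silently corrupting memory or wrapping around. */
    #include <stdbool.h>
    #include <stddef.h>
    #include <stdint.h>

    #define SAMPLE_CAPACITY 64u

    static uint16_t samples[SAMPLE_CAPACITY];
    static size_t sample_count = 0u;

    /* Reject writes past the end of the array; the caller must handle failure. */
    bool sample_buffer_push(uint16_t value)
    {
        if (sample_count >= SAMPLE_CAPACITY) {
            return false;
        }
        samples[sample_count++] = value;
        return true;
    }

    /* Clamp to UINT16_MAX instead of wrapping around on overflow. */
    uint16_t sat_add_u16(uint16_t a, uint16_t b)
    {
        return (a > (uint16_t)(UINT16_MAX - b)) ? UINT16_MAX : (uint16_t)(a + b);
    }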

Ideally, I do tons of testing. The whole point is to adequately measure the probability that a mitigation will fail and show it to be sufficiently low. A goal is to test the complete set of possible inputs. Frequently this is not possible due to the vast extent of an input dataset, such as found in ultrasound systems or any technology involving the solution of ill-posed problems (in a mathematical sense). Here I am forced to rely on testing as wide a swath of the input space as I can, then hope I haven't left any significant dirty corners untested. I often do this with synthetic data; however, this data is only as good as my understanding of how the real data varies. I also use plenty of real datasets to try and catch any gaps.  Performing this testing at a unit-test level, although convenient, often does not test the code in its natural habitat. Still, with careful thought and planning this may suffice. Another option is to build the test vectors into the final program.
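
Building test vectors into the final program can be as simple as running known input/expected-output pairs through a mitigation at power-up and refusing to start if any fail. The sketch below assumes a made-up mitigation (clamp_pressure) and made-up limits:

    /* Minimal sketch: a built-in self-test that exercises a mitigation with
     * known vectors at start-up. The mitigation and limits are illustrative. */
    #include <stdbool.h>
    #include <stddef.h>
    #include <stdint.h>

    /* Example mitigation under test: clamp a commanded pressure to a safe range. */
    static uint32_t clamp_pressure(uint32_t requested)
    {
        const uint32_t max_safe = 300u;   /* illustrative limit */
        return (requested > max_safe) ? max_safe : requested;
    }

    struct test_vector {
        uint32_t input;
        uint32_t expected;
    };

    static const struct test_vector vectors[] = {
        { 0u, 0u },             /* lower edge */
        { 300u, 300u },         /* exactly at the limit */
        { 301u, 300u },         /* just past the limit must clamp */
        { UINT32_MAX, 300u },   /* extreme input */
    };

    /* Returns true only if every built-in vector passes; on failure the caller
     * should enter a safe state rather than start normal operation. */
    bool power_on_self_test(void)
    {
        for (size_t i = 0u; i < sizeof(vectors) / sizeof(vectors[0]); ++i) {
            if (clamp_pressure(vectors[i].input) != vectors[i].expected) {
                return false;
            }
        }
        return true;
    }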

Ultimately, and this adds a bit of a paradox to the whole process, sometimes a better, safer solution is fewer mitigations. Every additional line of code I add while attempting to mitigate all the known and foreseeable risks is another line of code to debug and de-risk. This not only adds more opportunity for errors but also distracts me from ensuring the rest of the code is risk-free. If the probability of a risk leading to harm is very low, then it may be better to pay attention to the risks that have a higher probability instead.

These are a few of the strategies that work and that I use regularly depending on the circumstances. I would be interested to hear about others used by readers.

Kenneth MacCallum, PEng, is a Principal Engineering Physicist at StarFish Medical. He works on Medical Device Development and prefers his SOUP as a meal rather than as a medical device component.