Running Hamamatsu Orca Flash 4 code

Hello, I came across the Orca Flash 4 example code while trying to find a way to process images taken from that camera using OpenCV. Hamamatsu has an SDK, but modifying the source to process images with OpenCV (taking the stdev/mean of a 5-pixel kernel in real time) is a bit more than I can muster. As for running the Hamamatsu Python source on this site, what modules need to be installed for that to work? Do you ever program in Python in Visual Studio? I really like the IntelliSense, but there might be other issues with getting everything to work in Visual Studio that make it less than ideal.

Hello Jeremy!

Since the code was written, a lot of time has passed, and Hamamatsu has released a lot of documentation on their DCAM-API (which was not available when I worked with the code). If I’m not mistaken, OpenCV can work on any array, so you can grab the image from the camera and pass it to the std/mean functions you want to use.

What code did you find that brought you here? I will base my answer on this example. The important class in that file is HamamatsuCamera; in particular, the method getFrames reads the camera and returns a list of all the frames available.

Each frame is a numpy array, so you can work on it however you prefer, including with OpenCV. For the program to run, you will need to have DCAM-API installed. The library is imported on this line, so if you have problems after installing (perhaps newer versions of the library work differently), that is the line to check.
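To illustrate the idea (with a hypothetical stand-in class, since the real HamamatsuCamera needs DCAM-API and a connected camera), iterating over the frames and treating each one as an ordinary numpy array might look like this:

```python
import numpy as np

class FakeCamera:
    """Stand-in for HamamatsuCamera: the real class requires DCAM-API
    and a connected camera; only a getFrames()-like method is sketched."""
    def getFrames(self):
        # two synthetic 16-bit frames shaped like a small sensor readout
        return [np.random.randint(0, 65535, (256, 256), dtype=np.uint16)
                for _ in range(2)]

camera = FakeCamera()
frames = camera.getFrames()
for frame in frames:
    # each frame is a plain numpy array, so any numpy/OpenCV routine applies
    print(frame.shape, frame.dtype, frame.mean(), frame.std())
```

The per-frame statistics here are just placeholders for whatever processing you plug in.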

Regarding Visual Studio (or any other IDE): they are great tools for developing and testing code. However, if you want to run your programs, it is normally wise to do it from the command line. IDEs consume a lot of resources and may add overhead that can ruin your program (especially if it is time-sensitive, such as acquiring from a camera). I normally use PyCharm, which is free for educational purposes and also provides a community edition. My experience with VS was also positive, but it didn’t make sense to change from one IDE to another just because.

Hope I’ve managed to point you in the right direction! Let me know if there is anything else!

Hello Aqui,

Yes, your reply is very helpful, thank you.

I’m sure you have seen the DCAM SDK for that camera, and I can run it without issues. The problem starts when I try to figure out where in the Excap4 sample code to transfer frames to an array for OpenCV to process, and then how to change the GUI to make it easy to display and save the processed images. C++ code is a bear for me to follow, even debugging line by line. So it seems that starting with your Python sample code (the Hamamatsu code you linked is what brought me here) might be easier for me, as long as I can keep the overhead low.

I will make sure the DCAM-API is installed and see if I can get it running.

Great work on the articles/tutorials! I’m very interested in the GUI and driver development.


It’s been a while since I last checked the Hamamatsu code, and I don’t have access to a camera to test right now. I’ve seen Excap4, and it seems a fairly complex program, which I’m not sure is going to help you more than confuse you.

Are you developing anything very time-critical? I’ve managed to deal with >500 fps with the class in the link. And I believe OpenCV is highly optimized for the kind of operations you are interested in, so I think it is a good match.

What you have to understand is that once the data has been transferred from the camera to the computer, you will have very little (negligible) overhead if you call an OpenCV function on a numpy array (the frame from your camera). You won’t be transferring data to another memory space or anything. If you have some code to share, that would be great.

In the file that you found, note that at the end there are some examples of how to run it. In particular, if you change the 0 to a 1 in this line, you will acquire data from the camera. You can then change this other line to process the frames with OpenCV, and you will have a very quick working test.

When you are ready to build a GUI, you can check this article, which deals with a camera, so I think it can be a good match. Cameras generate a LOT of information very quickly; if on top of that you need to analyze or transform the data, you will run into performance issues. I’ve written a tutorial, which is fairly advanced, but it is the foundation of how PyNTA works.

Would you mind explaining what your experiment is?

Yeah, Excap4 definitely does more to confuse me; you are right about that.

I’m trying to do laser speckle contrast imaging to record surface blood flow (for example, see this article https://www.ncbi.nlm.nih.gov/pmc/articles/PMC3799684/ ), where I process a 5x5-pixel sliding window of standard deviation/mean in real time (say 10 fps) and save the results. I’ve done this using OpenCV with a Dalsa 1M60 camera and an EPIX frame grabber/programming library http://www.epixinc.com/products/xclib.htm with an SDK that is much easier for me to follow. Now I want to get up and running with the Hamamatsu camera, which seems very doable with the Python code you posted.


I’ve run into a snag getting this example code to run. Installing PyQt4 has been really troublesome, but I can install PyQt5 no problem. I updated the code for PyQt5, except I’m having a bit of trouble with SIGNAL in the _session.py file in the Model/Cameras folder (I updated to pyqtSignal, but apparently neither ‘Updated’ nor ‘New’ is supported as a pyqtSignal argument type). Can you comment on how to deal with a situation where you have to migrate code from an older version of PyQt, or do you not recommend doing that because of all the problems it can potentially cause?

Did you manage to acquire some data with your Hamamatsu camera (regardless of the user interface)?

Regarding PyQt4/5, I have to be honest and tell you that I made the mistake of sticking with PyQt4 for too long. I was reusing code, etc., and in retrospect, that was a huge mistake.

Some of the differences between PyQt4 and PyQt5 are just cosmetic, such as some imports now coming from different modules (from PyQt5.QtWidgets import QApplication used to be from PyQt4.QtGui import QApplication). That is easy to fix. However, PyQt5 has dropped support for old-style signals and slots. You can read a brief summary here. So, I would keep working on porting the program to PyQt5 if possible.

I couldn’t find the _session.py file you refer to in Model/Cameras. Would you post a link to the file in the GitHub repository? Also, it would be very useful if you copied the error you get while running the program. If you make your changes to UUTrack publicly available on GitHub, it is also much quicker for me to check and point you in a specific direction.

Yes, I have been able to verify that the camera is set up and capturing images in several programs, including the DCAM SDK example programs in C# and C++ (Excap4) that I compiled with Visual Studio 2019.

It sounds like I am on the right track trying to update to PyQt5. Like you mentioned, they are mostly easy fixes. Figuring out what to do with the signals is going to take some digging, though.


Lines 65 and 68 are the SIGNAL calls. I tried just making them pyqtSignal calls, but, as you mentioned, they don’t work the same anymore, so I need to figure out how to change the code and keep the functionality.

I’ll try to make my changes publicly available on GitHub. It seems fairly intuitive with Visual Studio, so we’ll see how it goes.

I was actually asking if you’ve managed to run the code using the Hamamatsu Python wrapper.

To update that part of the code, you will need to define the signal explicitly. I would add it right before __init__, like this:

Updated = pyqtSignal()

And then line 65 becomes:

self.Updated.emit()

You will have to do that for every signal (not only in the _session.py file).

Then, you will need to update the other part of the code, where you connect signals to slots. For example, here there is a line:

QtCore.QObject.connect(self.viewerWidget.startButton,QtCore.SIGNAL('clicked()'),self.startCamera)

Which should become:

self.viewerWidget.startButton.clicked.connect(self.startCamera)

It’s a bit of work, but once you get the gist of it, it should work.
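Putting the two halves together, here is a minimal self-contained sketch of the new-style syntax (the class and signal names are just for illustration, not the actual UUTrack code):

```python
from PyQt5.QtCore import QObject, pyqtSignal

class Session(QObject):
    # new style: the signal is a class attribute, replacing SIGNAL('Updated')
    Updated = pyqtSignal()

    def change(self):
        # old style was: self.emit(QtCore.SIGNAL('Updated'))
        self.Updated.emit()

calls = []
session = Session()
session.Updated.connect(lambda: calls.append('slot ran'))  # new-style connect
session.change()
print(calls)
```

Note that same-thread connections fire synchronously, so no event loop is needed to see the slot run.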

It was so long ago that I wrote UUTrack that I had even forgotten how convoluted _session.py had become. Fortunately, I don’t take that approach anymore!

You can also check PyNTA, which is under development (and, if you are new to Python, not a good starting point), but there I use the new syntax for signals and slots.

I’m trying to take a step back and do what you suggested, which was to run the code as-is, but I’ve hit some snags. Since you said you use PyCharm, I installed the edu version, but I have trouble installing the requirements from the requirements.txt file, both when using PyCharm and when using Visual Studio. PyQt4 has been especially challenging to install in either IDE. Any suggestions?

Hi jjsword,
I had the same problem installing PyQt4. I solved it by using Python 3.6.1 instead of the newest version, Python 3.7. For Python 3.6.1 it is possible to install PyQt4 without too much trouble (I am not completely sure which method I used, either pip or a wheel). Using an older version of Python has its own problems, but I hope this helps.

Thanks for the feedback!
To follow that approach, a great resource website is:

https://www.lfd.uci.edu/~gohlke/pythonlibs/

where you can find plenty of precompiled wheels for Python packages.

If you manage to make the switch to PyQt5 (which I strongly recommend), it would be great to see the code!