Roy and Niels

Saturday, July 30, 2011

Cloud computing in medical physics*: A snapshot - July 2011


These days if you do almost anything with a computer hooked up to the internet, you’ve probably heard the term “cloud computing”. Well, I’m here to tell you that cloud computing actually does mean something, and that it is something useful for medical physics! In this post I’m going to take a quick look at the current state of cloud computing research in medical physics and how it got there.

First things first: what do I mean by cloud computing? Cloud computing generally refers to scalable computing resources, such as data storage, CPU time, or software access, offered over the internet with a pay-as-you-go pricing scheme (and sometimes free). The service can be provided at a few levels: raw server-level access (infrastructure as a service, IaaS), pre-configured systems for specific tasks (platform as a service, PaaS), and software as a service (SaaS), e.g. Gmail or Dropbox. The interesting thing about cloud computing is that, suddenly, anyone armed with a network connection, a credit card, and a little know-how can have access to an unprecedented amount of computing resources. This opens up a huge number of computing possibilities for medical physicists, among others.

In the second half of 2009, our research group at the University of New Mexico, the UNM Particle Therapy Group, began investigating IaaS-style cloud computing as the basis for massively parallel medical physics calculations (read: very fast, very large calculations). Our very first results, demonstrating proof-of-concept with proton beam calculations on a “virtual Monte Carlo cluster” of nodes running on Amazon.com’s EC2 service, were presented at the XVIth ICCR conference in Amsterdam in May 2010. We presented a poster with our second set of results at the AAPM annual meeting in July 2010 and then posted a paper to the arXiv titled “Radiation therapy calculations using an on-demand virtual cluster via cloud computing” http://arxiv.org/abs/1009.5282 (Sept. 2010).

It has been exciting to see the reactions to our posters, talks, and arXiv preprint, with most physicists immediately seeing the potential benefits of the new levels of access to computing resources that cloud services provide. Even more exciting is to see the projects subsequently launched by other medical physics researchers.

So what’s happening now?
  • Chris Poole, et al (Queensland Univ. of Tech.) posted a note to the arXiv titled “Technical Note: Radiotherapy dose calculations using GEANT4 and the Amazon Elastic Compute Cloud” http://arxiv.org/abs/1105.1408 (May, 2011)
  • Chris Poole also posted code used in that project to http://code.google.com/p/manysim/ (May, 2011)
  • UNM held a workshop that included tutorials on using cloud computing for medical physics calculations. Sponsored in part by Amazon.com. (May, 2011)
  • UNM launched a cloud computing webpage in conjunction with the workshop: http://www.cs.unm.edu/~compmed/PTG/cloud.html (May, 2011)


Compared to our single abstract at AAPM last year, you could claim there has been an “exponential explosion” in interest in cloud computing in medical physics since we presented our first results in 2010! Does this mean we’ll see 50 abstracts in 2012 mentioning cloud computing? (Two points does not a trend make?? ;) )

I look forward to seeing where this new technology will take our field.


*I’m deliberately leaving out all of the (mostly commercial) cloud-based PACS and related radiology software and services. Some of these are mentioned in our paper on the arXiv.




Friday, July 1, 2011

Antiproton Radiotherapy Experiments at CERN

At the moment we have a week of antiproton beam time at CERN for radiobiology and dosimetry experiments. The main experiment is to measure the relative biological effectiveness (RBE) of antiprotons. As an endpoint we use clonogenic survival of V79 Chinese hamster cells (in vitro).

What makes this experiment so complicated is:
  • we only have a narrow-beam geometry available at CERN
  • antiprotons are rare; we only get approx. 1 Gy/hour
  • the beam is highly pulsed, i.e. a 500 nanosecond spill every 90 seconds.
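A quick back-of-the-envelope calculation shows what the last two points mean together: 1 Gy/hour with one 500 ns spill every 90 s is roughly 1 Gy × 90/3600 ≈ 25 mGy per spill, squeezed into half a microsecond - an instantaneous dose rate of about 5 × 10⁴ Gy/s. At rates like that, ion recombination in ionization chambers becomes a real concern, which is one more reason to have several independent systems watching the same beam.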
Therefore, we invest a lot of effort in performing precise dosimetry with multiple redundant systems. This is a long story, which I will tell more about another time. Here, let me just show a few pictures...

The antiproton beam line with a water phantom for dosimetry.
The entire experiment is located in an experimental zone at the antiproton decelerator (AD) at CERN. We share our zone with the AEgIS people, who want to find out if antiprotons fly up or down in the gravitational field of the earth. :)

Franz-Joachim Kaiser messing with the water phantom. Behind him the AEgIS beam line.
Three ionization chambers (ICs) are visible here, from left to right: a custom-made "Advanced Roos" chamber, a Markus chamber and the MicroLion liquid ionization chamber. All by PTW.
Gafchromic EBT film irradiated with antiprotons. Beam spot is about 1 cm FWHM. The narrow beam geometry makes us very vulnerable to positioning errors...

... and therefore we also monitor the beam from spill to spill with a Mimotera detector.
We are usually 2-4 people on a shift. Tonight I will do the night shift with Franz-Joachim (to the left). Stefan will leave soon. Usually, I would do night shifts with Roy Keyes, but he couldn't be here this year.
I built this little box for the experiment: it interfaces the antiproton decelerator with the parallel (printer) port of our data acquisition computer. No need for expensive IO cards or fancy LabVIEW; basically it is just some TTL logic and optocouplers. On the server side, a daemon listens on the parallel port for the signal that a new spill of antiprotons is coming in.
Once triggered, the server reads out all data systems, such as the beam current transformers, ionization chambers and scintillators.
Client programs, here running on the laptop to the left, can connect to the server and change various settings of the readout procedure.
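For the curious, the core of such a daemon can be very little code. Below is a minimal sketch in C of the polling loop; the assumption that the trigger pulse arrives on the nACK line of the status register is mine - the real box may well use a different pin, and the readout call is only a placeholder.

/* Minimal sketch of a spill-trigger daemon (x86 Linux, needs root).
   Assumes the trigger arrives on the nACK line (bit 6 of the LPT1
   status register at 0x379) - an illustrative choice, not necessarily
   how the real box is wired. */
#include <stdio.h>
#include <unistd.h>
#include <sys/io.h>

#define STATUS_PORT 0x379  /* LPT1 base 0x378 + 1 = status register */
#define ACK_BIT     0x40   /* bit 6: the nACK input line */

int main(void)
{
    if (ioperm(0x378, 3, 1)) {  /* ask the kernel for port access */
        perror("ioperm");
        return 1;
    }
    int prev = inb(STATUS_PORT) & ACK_BIT;
    for (;;) {
        int cur = inb(STATUS_PORT) & ACK_BIT;
        if (cur && !prev) {
            /* rising edge: a spill arrived - here the real server
               would read out beam transformers, ICs and scintillators */
            printf("spill!\n");
        }
        prev = cur;
        usleep(1000);  /* poll at ~1 kHz; spills come every ~90 s */
    }
}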

Again, this is home-brew. Earlier, the data acquisition was some libncurses-based stuff; this year is the first time we have a traditional GUI for the client and a clean client/server separation. I wrote the client in C++/Qt4 and compiled it for Linux and win32. Stefan did a package for Mac. The server is pure C, Linux only. Sometimes I think the most valuable course I took as a student at our physics department in Aarhus was a C programming course. (And that course was only offered once! What a shame!)

Fiona from the Belfast QUB group is in charge of the film scans, and making sure there are enough sweets for all of us.
The entire experimental zone in the AD hall, seen from above, from where our non-existent counting hut would be...
A bit off topic, but just over our heads, there is a positron beam line, which delivers the positrons to the anti-hydrogen "bottle" of ATRAP. Positrons go from the right side to the left.

Me, checking up on things... :-) I think this is the 8th or 9th time I am working at the ACE experiment.
Alanine is one of the most reliable solid state dosimeters for exotic beams such as antiprotons. Here a stack of pellets is prepared for irradiation.

Once the pellets have been irradiated with antiprotons, they are shipped for readout to our collaborators at the National Physical Laboratory (NPL) in Teddington, UK. (NPL also serves as a primary standards lab for radiation quantities.)


Alright then.. :/

Recursively posting this blog entry.
More pictures here.


Wednesday, April 27, 2011

The Monte Carlo Race 2011

We saw earlier how different cars (and a box of Lego) could fit the description of various Monte Carlo particle transport codes. Every year there is the prestigious Monaco Grand Prix (according to Wikipedia; I must stress that I am absolutely not into cars!), and this race passes through the famous Monte Carlo quarter with its casino.



What you may not be aware of is the little-known sister event called the Monte Carlo Grand Prix, the year-long publishing competition of Monte Carlo codes. I’m going to give you a rundown of the latest results. Get a seat in the grandstand and plug in your ear plugs!


Monte Carlo codes are surging in radiotherapy treatment planning. For a course I had to give [footnote1] at Aarhus University Hospital, I prepared a little review of MC codes in radiotherapy. Inspired by a talk given by Emiliano Spezi, I did a small study using the ISI Web of Knowledge, which clearly shows this trend:
Number of publications on “Monte Carlo Treatment Planning”. Data for 2011 are still incomplete for obvious reasons.
For a real race we of course need several competitors. In addition to the Geant4, FLUKA and SHIELD-HIT cars, we will have a few new participants joining the race!

From the country of The United States of America come two participants, MCNP and MCNPX, developed in the secret labs of the Bombs-R-Us factory (a.k.a. Los Alamos), now released to the public (with a few exclusions) for a modest fee. MCNP helps physicists transport neutrons as well as photons and electrons. The grand old MCNP brings along its more recent offspring MCNPX, which can also transport heavy ions!

Just north of the US, the Canadian National Research Council funded the development of EGS4/EGSnrc, and as a courtesy to our medically minded people BEAMnrc was developed, which is particularly suited for simulations of linear accelerators for radiation therapy. Rumours say that this nifty product is based on US technology from the University of Wisconsin; ‘traitors’ some might argue, but I’d say this is a remarkable example of pan-American friendship!

The Japanese people, known for their industrial know-how, diligence and incomprehensible machine-translated manuals, contribute this gemstone of technology: the PHITS multi-purpose particle transport vehicle... sorry... code, maintained by the Japan Atomic Energy Agency! A jovial attitude to technology breeds wonderful gadgets, often decades ahead of what eventually appears in Europe.

Finally, and having been among us for a while, are the diligent efforts of our European friends at the University of Barcelona in Spain. Hola PENELOPE! PENELOPE can transport photons and electrons; evil tongues may even say much faster and more smoothly than our Canadian competitors. Let the masses judge how well they perform on the curved streets of Monaco.

Anyway, the race started many years ago and is still going on, let us see the current results:

Number of publications on various MC codes (not restricted to therapy planning)

Clearly, MCNP was roaring through the Mediterranean streets long before computers became mainstream, when 1 MByte was still an unbelievable amount of memory. It would have been a dull race if EGS4 had not slowly appeared on the horizon. Geant4 was not even conceived when FLUKA already had an early start. Both Geant4 and FLUKA are developed at CERN, which makes their rivalry particularly exciting. Geant3 does not even appear here; it suffered a trauma at an earlier rally - it might recover, but bones tend to heal slowly at that age.

The fiery-tempered Spanish PENELOPE and the latest ace from the United States, MCNPX, entered the rally at roughly the same time, and here the race gets really exciting! Let us take a closer look at the last 10 years. All codes are surging, so I will plot the numbers relative to the total number of publications per year:

The Monte Carlo Particle Transportation Grand Prix, during the last 10 years.


It’s clearly a close race between the elderly and not-so-elderly codes. MCNP is clearly losing to its younger offspring MCNPX, which also offers heavy ion transport. However, it is difficult to hide that MCNPX in turn is losing to Geant4. The good news for MCNP is that it will be merging with the MCNPX code base, so it will get a boost of sorts! The success of Geant4 is truly amazing! Even if Geant4 cars are complicated DIY affairs, the total freedom to mod your car and give it personal style seems to attract a solid customer base.

FLUKA shows a remarkably constant relative user base: going steady, going strong. But woo-hoo... what’s happening with EGS4? The medical physicists came out of the closet declaring themselves BEAMnrc users, and EGS4 is superseded by EGSnrc... 2011 could be one of the last years we hear of EGS4, but no one will really miss it; we’ve got EGSnrc and BEAMnrc.

Still trailing is PHITS. PHITS users have not yet published this year (2011) at the time of writing, but I would not be surprised if it suddenly came into play. Hopefully the Japanese physicists will find time to catch up after they fix the issues with their nuclear power plants.

Where are the Russians? I can see smoke coming up on the horizon somewhere behind; I hope it is not a fire, but just a demonstration of the new engine. I am confident the Russian mechanics (with some aid from their friends in Aarhus :-) are working on it, and we will soon see fast SHIELD-HIT Niwas with whining tires and roaring physics engines in the curved streets of Monte Carlo.

The race isn’t over yet, and frankly I am not sure it is supposed to end at all... but this much can be said: some codes will prevail, others may lag and eventually disappear from the race.

See you next year at the Monte Carlo rally 2012 - readers are welcome to place their bets for 2012 below!



[footnote1]
“You have to give a 5 ECTS course in dosimetry.” Yeah right, I thought. It took me almost 4 months to prepare and run it, plus all the exams (400 pages + external reviewer), no time for research whatsoever, and I was looking forward to the day it was all over. What followed? Research? Guess again: more teaching!

Monday, March 21, 2011

The "ideal" Monte Carlo user interface

If you have read many of our posts, you probably know that radiation Monte Carlo software plays a large role in both Niels' and my research efforts. Niels wrote an insightful and entertaining overview of the available Monte Carlo engines relevant for particle therapy. I'm going to talk about one of the things that most MC users have surely thought about or at least been frustrated by: user interfaces.

I'll start with a disclaimer. Monte Carlo for radiation / particle transport is not a simple problem. It takes many person-years of effort to produce a codebase that gives reasonable results. This post is mostly my musings about some ideal user interface. I am the proverbial beggar who also wants to be a chooser :)

Most interaction with MC programs happens through input files of some sort (in Geant4 these are called macro files and function in largely the same way). Usually these input files let you set the majority of the important parameters of your simulation (e.g. beam parameters, target geometry, detector geometry, scoring parameters, etc.). While initially cryptic (see Fluka and MCNP input files), these text input files are extremely flexible and can be generated programmatically or with a GUI. The problem, of course, is that your typical input file format is not very intuitive: it is designed to be machine-parsable rather than human-friendly. And as all new MC users know, input files and their syntax are one of the major hurdles in getting up and running.

A Fluka input file.
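To illustrate the “generated programmatically” point: the sketch below writes a series of Geant4-style macro files for a beam energy scan. The /gun/ and /run/ commands are the standard Geant4 UI commands for the particle gun and run manager; everything else (file names, energies, statistics) is made up for the example.

/* Toy example: generate one Geant4 macro file per beam energy.
   The macro commands are standard Geant4 UI commands; the energy
   scan itself is purely illustrative. */
#include <stdio.h>

int main(void)
{
    const double energies_mev[] = { 50.0, 100.0, 150.0, 200.0 };
    char fname[64];

    for (int i = 0; i < 4; i++) {
        snprintf(fname, sizeof fname, "proton_%03.0f_MeV.mac", energies_mev[i]);
        FILE *f = fopen(fname, "w");
        if (!f) { perror(fname); return 1; }
        fprintf(f, "/gun/particle proton\n");
        fprintf(f, "/gun/energy %.1f MeV\n", energies_mev[i]);
        fprintf(f, "/run/beamOn 100000\n");
        fclose(f);
    }
    return 0;
}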

So what would the ideal MC user interface look like? Radiation physics users of MC engines want to irradiate something and score some quantity. For medical physics users, that usually means irradiating a human or water phantom and scoring dose or fluence. To me the obvious interface for MC codes would be identical to 3D CAD (computer-aided design).


The open source FreeCAD CAD program.

A 3D CAD style interface would put you directly in the geometry of the simulation world. Build your target from simple shapes or import existing geometries (e.g. DICOM files), graphically designate your beam source, type, and direction, and set up your detector geometries. More importantly, you would be able to manage your simulation end-to-end in the interface.

It can be argued that 3D CAD is as hard as or harder to learn than a given MC engine. My approach for a user interface would be to expose the minimum useful controls, making advanced options discoverable through menus and configurable with shortcuts.

The follow-up disclaimer is that 3D CAD is also not an easy problem, so we are unlikely to see this soon. In fact, many MC programs can import geometries from CAD programs (see SimpleGeo), but I'm unaware of any that have attempted to fully integrate a CAD-style GUI as the primary user interface.

What's your ideal Monte Carlo user interface? Leave us a comment and let us know.

Monday, March 14, 2011

How to Produce Antiprotons

Both Roy and I work on the AD-4/ACE project at CERN where we investigate antiprotons as candidate particles for use in cancer therapy. We have about one week of beam time every year where we conduct radiobiological and dosimetric experiments at the beam line in a very interdisciplinary team consisting of physicists, radiobiologists and radiation oncologists from more than 10 universities and university hospitals.

CERN is the only place in the world where we have an antiproton beam at sufficiently *low* energy, that is, around 100 MeV, which corresponds to a range of ~10 cm in water. The LHC is not involved in the production at all; in fact, only a relatively small part of the CERN complex is used for antiproton production. The production is nonetheless very cumbersome. First a high-energy proton beam must be made. This happens at the Proton Synchrotron (PS), the old workhorse of CERN. It was inaugurated by our great Dane Niels Bohr in 1959.
The proton beam is accelerated to 26 GeV and then dumped onto a target followed by a so-called magnetic horn.

Antiproton production target.


Basically, it is an air-cooled iridium target. When the beam is dumped, two protons are converted into three protons and an antiproton (p + p → p + p + p + p̄; on a stationary proton the threshold kinetic energy for this reaction is 6 m_pc² ≈ 5.6 GeV, so the 26 GeV beam has comfortable headroom). During the dump a powerful current is sent along the beam axis, generating a magnetic field that keeps as many antiprotons as possible on axis. Immediately after the target there is a “lithium lens” (a Russian invention), which tries to capture even more of the very precious antiprotons. The created antiprotons have a very high energy of several GeV and are then captured by the Antiproton Decelerator (AD). It then takes more than 80 seconds to slow the beam down. The deceleration itself is actually not the time-consuming part; rather it is shaping the beam, making it small and narrow, so that antiprotons are not lost during the deceleration process.
This is realized using stochastic cooling. Along with electron cooling (which was invented by G.I. Budker and is widely applied), this removes energy from the transverse motion of the antiprotons, thereby reducing the emittance of the beam.

Yesterday (while following Dag Rune Olsen’s Twitter account) I learned that Simon van der Meer, the inventor of stochastic cooling and winner of the Nobel Prize, died on 4 March 2011.
From our last antiproton run at CERN I have a large amount of video material of technicians working at the AD, which also demonstrates antiproton production and stochastic cooling of the resulting beam. Check out the excerpt below:


And yes, of all those computers, only one of them was running windows. :)

At 1:49 you can see the AD hall. The antiproton decelerator is under that ring of concrete. Those large coils at the far end, visible at 1:54, are delay coax cables which “short circuit” the AD ring across the middle.
At 2:00 you see the bullseye of the production target, and at 2:20 the 26 GeV proton beam hits the target and the antiprotons are made. These are slowed down, and the oscilloscope shows how the emittance is reduced by stochastic cooling. The beam is then ramped down stepwise to 126 MeV and cooled between those steps. Finally the 126 MeV antiproton beam is extracted.
(I plan to produce more videos about the AD-4/ACE experiments, but currently kdenlive crashes frequently and corrupts my project files. It took me almost an entire day to edit those 4 minutes of video.)

Here is another video which shows the construction of the antiproton production target and the collector. The white cylindrical object behind the target is the lithium lens.


http://cdsweb.cern.ch/record/1063081

This is one of the “hottest” sites at CERN. Things are designed to require minimal human intervention. Here is a very old video of how a faulty magnet had to be replaced near the production target. People have to plan each step in advance before they enter the zone.

http://cdsweb.cern.ch/record/1171261

Monday, February 21, 2011

Notes from the 2011 Geant4 Winter Course Tutorial


In January I traveled to Texas A&M University in College Station, TX, USA to attend the Geant4 Winter Course Tutorial and brush up on my Geant4 skills. Here are some of my impressions. The tutorial ran from 10-14 January and, despite being in Texas, the week was on the cold side. Though hosted by the Texas A&M Dept. of Nuclear Engineering, the tutorial itself was held in the main auditorium of the Texas A&M Institute for Preclinical Studies (TIPS). There were about 100 attendees from all over the world, with a sizable fraction from A&M.

So what goes on at one of these tutorials? As described in Niels' earlier post about the different Monte Carlo "codes", Geant4 is not a program per se, but rather a toolkit, primarily consisting of a set of C++ libraries and data files. The tutorial was aimed at bringing Geant4 newbies and post-newbies up to speed and slightly beyond. To accomplish this, we attended classes for five days in the stadium-style auditorium, with lectures covering a myriad of topics. Each day had one or two "hands-on" sessions, in which we'd work through examples and have our questions answered. In many ways, the hands-on sessions were the most useful, because of the one-on-one help. The actual exercises may not have been terribly useful, but having the time to ask questions with laptop at the ready was when things came together. I certainly asked a few questions that were completely unrelated to the tutorial (but not unrelated to my research!) (Thanks again, Sebastian!). The other thing I found extremely helpful was simply having time to discuss things with the Geant4 developers and other users. On the last day there were break-out sessions on medical physics, high energy physics, and DNA damage. I attended the medical physics session with about 30 other people.

Other random notes:
  • The attendees were a good mix of medical physics, nuclear applications, high energy physics, and others.
  • I met a number of people involved with proton therapy projects as well as a post-doc from Wayne State working on their re-commissioned fast neutron therapy project.
  • To keep small animals outside of TIPS, they had several poisonous plastic rocks outside the building (yes, really).
  • We got a nice tour of TIPS, which included seeing the "most powerful" PET/CT scanner in existence.
  • Catering by Jason's Deli all week
  • The possibility of using cloud computing for distributed Geant4 was mentioned in a lecture by Asai. This is the subject of one of my projects (see arXiv:1009.5282v1 [physics.med-ph]).
  • Looked a lot like a web surfing conference.




Overall it was a good tutorial. We learned that we were taking some wrong (or at least more difficult) paths in our code. Certainly Geant4 is a vast topic and a tutorial like this can be very helpful, if only to meet the right people for when you need to ask questions.

Monday, January 31, 2011

Making Bubbles: The Particle Way

BTI Bubble Technology Industries manufactures neutron detectors intended for personal dosimetry. These devices contain a polymer gel holding very small droplets of a superheated gas. When a neutron interacts with one of these superheated droplets, a phase transition happens from the liquid phase to the gas phase, expanding the volume dramatically - a bubble appears.

The video below demonstrates how such a detector responds to a (weak) Americium-Beryllium neutron source:



The emission rate of the AmBe source was 2.64E+4 neutrons per second. The detectors we had were calibrated in terms of ICRP-60 dose equivalent, according to BTI. The sensitivity of the particular detector shown in the video above was about 0.7 bubbles / µSv dose equivalent.
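For a feel of the numbers: the fluence rate at a detector follows from the inverse square law, Φ = S/(4πr²). At, say, 10 cm from the source (a distance I am assuming here purely for illustration), that gives 2.64×10⁴ / (4π × 10² cm²) ≈ 21 neutrons/cm² per second. Converting that fluence to dose equivalent, and hence to an expected bubble count, additionally requires a fluence-to-dose conversion coefficient for the AmBe spectrum.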

The detectors come with an integrated piston which re-pressurizes them, so they can be reused, though not indefinitely. We used our detectors rarely and kept them refrigerated; even so, after two years the encapsulation/pressurization system leaked.

The bubble detectors can be bought with varying sensitivity ranges, and BTI even offers a set of detectors which are sensitive above a series of energy thresholds. Deconvoluting the counts in each detector of this set yields a coarse energy spectrum, in e.g. 6 energy bins (see the sketch below).
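In essence, the deconvolution amounts to solving a small linear system: the bubble count N_i in detector i is N_i = Σ_j R_ij Φ_j, where R_ij is the response of detector i to neutrons in energy bin j and Φ_j is the fluence in that bin. With six detectors and six bins the matrix can be inverted directly, though in practice a least-squares fit with non-negativity constraints is more robust against counting noise.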

We have used these bubble detectors at our antiproton beam line at CERN in order to get a coarse measure of the amount of fast neutrons emitted from the antiproton annihilation. We used both the personal dosimeter type and the BDS spectrometer.



The picture above shows how multiple personal dosimeter detectors are placed at a certain distance from the annihilation vertex.

Unfortunately we had some trouble interpreting the results from the spectrometer. The BDS spectrometer counts seemed simply unphysical, and a spectrum could not be deconvoluted. The readings from the personal dosimeters also seemed to be off by an order of magnitude.

After some investigation we started to suspect that these bubble detectors were sensitive not only to fast neutrons but also to charged particles, such as protons. From the antiproton annihilation we do get a similar amount of protons, and a threefold multiplicity of energetic pions, which have a long range, far beyond the position of the bubble detectors.

A paper we published in NIM B a few years ago lists our findings. Basically, we conclude that the sensitivity (number of bubbles per neutron) is quite comparable to that for protons, and perhaps a bit lower for pions. The proton response we could test at our storage ring ASTRID, which we have in the basement of our physics department in Aarhus.

In the video below, we extract a few million protons at about 50 MeV from the synchrotron. The bubble detector here is immersed in a water bath.



The range of the protons is clearly visible (for 50 MeV protons, roughly 2.2 cm in water). A distinct Bragg peak does not really form; the bubble formation is primarily governed by nuclear interaction cross sections rather than by the stopping power.

Friday, January 14, 2011

Open access, medical physics, and arXiv.org

If you read research papers, chances are you’ve heard the term open access. In this post I’m going to talk about what open access is, the state of open access in medical physics, and what medical physicists can do if they want to make their work open access using sites such as arXiv.org. The quick summary is: the primary obstacle to open access in medical physics is adoption by authors. Most journals are already on-board in important ways. If you want to make your medical physics publications open access, you probably can and I encourage you to do so.

According to our friends at Wikipedia, open access is “unrestricted online access to articles published in scholarly journals”. Open access is generally placed into two categories: “Green” open access and “Gold” open access. Green access is defined as author “self-archiving”, when the author places a copy of a paper on their own site or on an e-print server. Gold access is free access provided directly by journals.

No-fee access has many benefits for researchers. For medical physics, these benefits are potentially greater than for other fields, because medical physicists are found in a wide variety of settings with varying levels of paid journal access (e.g. universities, community hospitals, small clinics). Even being located at a large university with a large medical center, I have personally run into access barriers. For example, I can only access Medical Physics with my personal subscription; for a time the library subscribed to the Red Journal, but not the Green Journal; and the university has no love for Radiation Protection Dosimetry whatsoever. In the current economic climate I’m not optimistic that institutional subscriptions will be on the increase. Ultimately, open access offers availability of information to all, regardless of institutional affiliation or budget. (Also, I hate messing with proxies... :) )

While open access is strongly established in some disciplines, particularly physics, computer science, math, and earth science, medical physics has lagged far behind the curve, especially in self-archiving.

“The availability of gold and green OA copies by scientific discipline. The disciplines are shown by the gold ratio in descending order, rather than in alphabetical order.” CCA 3.0. Björk B-C, Welling P, Laakso M, Majlender P, Hedlund T, Gudnason G. doi:10.1371/journal.pone.0011273

The above plot from Björk et al. shows the percentage of publications that are made open access in different disciplines. In some sub-disciplines of physics, such as high energy physics, the rate of self-archiving is up to 100%. For reasons unknown to me, medical physicists have not embraced open access, despite the supportive policies of medical physics journals (see the lists below). I suspect that medical physicists are largely ignorant of the journals’ policies. If medical physicists want to provide open access to their work, they have options to both self-archive (green) and publish in open access journals directly (gold).

arXiv.org
In 1991, high energy physicists began self-archiving their publications on a site called the arXiv (that X is supposed to be like a Greek “chi”). Since then, the arXiv has expanded to cover all of physics, as well as other fields, such as mathematics and computer science. As the leader in physics self-archiving, the arXiv is a logical destination for medical physicists to post their papers; in fact, the arXiv has a medical physics category. Currently, that category has very low activity relative to the number of eligible medical physics articles published. (I plan to investigate the posting rates in a future blog post.) While the activity is low, it is encouraging to see prominent medical physics researchers, such as Steve Jiang (UCSD) and Thomas Bortfeld (MGH/Harvard), posting articles. (Thanks!) In fact, one tiny area of medical physics that seems to be very well covered on the arXiv is GPU-based calculations; that’s probably due to Jiang’s group leading the way in posting their publications.

Medical journals
While medical physics journals all allow self-archiving to servers such as arXiv.org, medical journals related to medical physics seem much less enthusiastic about open access (see the list below for details). The Elsevier journals allow pre-prints and self-hosted archiving, but the main radiology journals have open access “hostile” policies. The one thing that has seemed to crack journals such as those from the RSNA is government mandates. One example is the recent rule requiring NIH-funded research publications to be made available as open access on PubMed Central within 12 months of initial publication. This rule has had a wide-ranging effect on journals and led to much discussion. Funding agencies in other countries have instituted similar rules.

What does this all mean for you?
  1. If you’ve published in medical physics journals, you can probably make your work open access right now by posting your articles to arXiv.org.
  2. If you are planning to submit an article to a journal, you should read the journal’s copyright policy before submitting and before posting a pre-print (or post-print). Some journals have very strange policies, unfortunately, and this has to be taken into account when submitting for publication.
I encourage authors to strongly consider making their work open access, either by self-archiving to the arXiv or publishing in one of the gold access journals. Ultimately, the arXiv is just an example of an e-print repository, but it seems to be the best choice for now. If, for example, a dedicated medical physics repository were created and critical mass were achieved, the papers on the arXiv could be stored there as well. I haven’t discussed the concerns some people have with open access (see the Wikipedia entry). If there is interest, I might talk about that in another post.

Journal policies
Below I will list the current (Jan. 2011) policies of the journals (as far as I can tell; UAYOR, YMMV, IANAL, etc.). You can find more information about the open access policies of these journals and others by using the SHERPA/RoMEO tool.

The state of open access in medical physics journals:
The state of open access in medical journals related to medical physics:

Wednesday, January 5, 2011

libdEdx version 1.0 released

Version 1.0 of libdEdx is now available at sourceforge.net

List of features:
  • ICRU 49 data tables for protons and helium ions (PSTAR, ASTAR)
  • MSTAR for heavy ions
  • ICRU 73 with and without the erratum for water target
  • A Bethe implementation for any ion, including the Lindhard-Sørensen equation for low energies
  • Support for 278 ICRU target materials, i.e. the complete ESTAR material table, all with default I-values for the Bethe equation
  • I-values can be overridden for elements
  • Automatic application of Bragg's additivity rule if the requested target material does not exist in the default table for e.g. MSTAR (see the note after this list)
  • Detailed documentation, and multiple example files
  • CMake-based installer, with an uninstall target.
  • getdedx as a frontend command line program for querying the library
  • Two modes of operation: a simple one for lazy programmers and a fast one for e.g. MC codes.
  • GPL license (non-GPL versions available upon request)
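A note on Bragg's additivity rule, since the library applies it silently: the mass stopping power of a compound is approximated by the mass-fraction-weighted sum of the elemental mass stopping powers, (S/ρ)_compound = Σ_i w_i (S/ρ)_i, where w_i is the mass fraction of element i. This neglects chemical binding effects, which is usually a fair approximation, and it is why getdedx warns you when the rule has been applied (as in the alanine example below).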



How to use libdEdx, simplest possible example.
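In code, the "simple" mode boils down to something like the following. Be warned that the function and header names in this sketch are illustrative guesses, not necessarily the exact libdEdx API; the example files shipped with the package are authoritative.

/* Sketch only: names are illustrative, not necessarily the real API.
   Mirrors the getdedx example below: PSTAR (program 2), protons (Z=1),
   water (ICRU target 276), 100 MeV/u. */
#include <stdio.h>
#include <dedx.h>  /* hypothetical header name */

int main(void)
{
    double dedx = dedx_get_simple(2, 1, 276, 100.0);  /* hypothetical call */
    printf("dEdx = %.3E MeV cm2/g\n", dedx);
    return 0;
}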

Demonstration of the command line program getdedx. 100 MeV protons on a water target using PSTAR:

Usage: getdedx program_id  Z icru_target_id energy.

bassler@kepler:~$ getdedx 2 1 276 100
100.000000 MeV/u HYDROGEN ions on WATER target using PSTAR
dEdx = 7.286E+00 MeV cm2/g

Carbon ions on alanine target, using ICRU 73:

bassler@kepler: getdedx 5 6 105 400
400.000000 MeV/u CARBON ions on ALANINE target using ICRU73
 Bragg's additivity rule was applied,
 since compound ALANINE is not in ICRU73 data table.
dEdx = 1.068E+02 MeV cm2/g

For reporting bugs and feature requests, you can use our trac ticket system or drop us an email.