Absorption is very high when voxel size is lower than 1mm #139

Closed
Edouard2laire opened this issue Feb 15, 2022 · 8 comments
@Edouard2laire

Edouard2laire commented Feb 15, 2022

Hello @fangq

I am contacting you because I am having an issue simulating fluences with an MRI that has a 0.8mm voxel size. From the command output, it seems that all the photons are absorbed. However, the simulation seems to run fine if I first resample the MRI and the tissue segmentation to a 1mm voxel size. Is that expected behavior?

GPU=1 (GeForce GTX 1050) threadph=976 extra=5760 np=10000000 nthread=10240 maxgate=1 repetition=1
initializing streams ... init complete : 14 ms
requesting 1024 bytes of shared memory
launching MCX simulation for time window [0.00e+00ns 5.00e+00ns] ...
simulation run# 1 ...
kernel complete: 110896 ms
retrieving fields ... transfer complete: 111008 ms
normalizing raw data ... source 1, normalization factor alpha=31.250000
data normalization complete : 111532 ms
simulated 10000000 photons (10000000) with 10240 threads (repeat x1)
MCX simulation speed: 90.19 photon/ms
total simulated energy: 10000000.00 absorbed: 99.97427%
(loss due to initial specular reflection is excluded in the total)

Here is the output if I resample the MRI (the computation is also a lot faster):

GPU=1 (GeForce GTX 1050) threadph=976 extra=5760 np=10000000 nthread=10240 maxgate=1 repetition=1
initializing streams ... init complete : 14 ms
requesting 1024 bytes of shared memory
launching MCX simulation for time window [0.00e+00ns 5.00e+00ns] ...
simulation run# 1 ...
kernel complete: 14371 ms
retrieving fields ... transfer complete: 14483 ms
normalizing raw data ... source 1, normalization factor alpha=20.000000
data normalization complete : 14643 ms
simulated 10000000 photons (10000000) with 10240 threads (repeat x1)
MCX simulation speed: 696.77 photon/ms
total simulated energy: 10000000.00 absorbed: 48.80228%
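
For reference, the relevant mcxlab fields in my setup look roughly like the sketch below (illustrative values only, not the actual dataset; seg stands in for the segmented MRI volume):

% Minimal sketch of the configuration (values are illustrative, not the real data)
cfg.vol      = uint8(seg);            % segmented MRI labels, here on a 0.8 mm grid
cfg.unitinmm = 0.8;                   % physical edge length of one voxel, in mm
cfg.prop     = [0     0   1    1;     % label 0: background; rows are [mua mus g n], mua/mus in 1/mm
                0.019 7.8 0.89 1.37]; % label 1: example tissue
cfg.nphoton  = 1e7;
cfg.srcpos   = [100 100 100];         % source position on the grid
cfg.srcdir   = [0 0 1];               % unit direction vector
cfg.tstart   = 0;
cfg.tend     = 5e-9;
cfg.tstep    = 5e-9;
flux = mcxlab(cfg);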

Regards,
Edouard

@fangq
Owner

fangq commented Feb 18, 2022

@Edouard2laire, can you post your input file with a link (Dropbox, Google Drive, etc.) so I can take a look?

@Edouard2laire
Author

Hello,
Can I send you the link by email (it's sensitive data)? Which file should I send you: the T1 + the segmentation, or the mcxlab configuration file?

Edouard

@fangq
Owner

fangq commented Feb 21, 2022

Of course. The mcxlab script/cfg is preferred.

@fangq
Owner

fangq commented Mar 13, 2022

@Edouard2laire, sorry, I might have missed it; did you send a test dataset?

@Edouard2laire
Author

Edouard2laire commented May 11, 2022

Hello,
Thanks for your patience.

I tried to generate the JSON file as asked, but it doesn't seem to be working.

I added mcx2json(cfg,'config_orig.json') here: https://github.com/Nirstorm/nirstorm/blob/master/bst_plugin/forward/process_nst_cpt_fluences.m#L321

but when loading it with json2mcx I get the following error:

Brace indexing is not supported for variables of this type.
Error in cell2mat (line 42)
cellclass = class(c{1});
Error in json2mcx (line 65)
cfg.prop=squeeze(cell2mat(struct2cell(cell2mat(json.Domain.Media))))'; 

The output of mcx2json can be found here: https://drive.google.com/file/d/1yAyPFuyvikt96n2qQk4_Qfthrsg3P0Kr/view?usp=sharing

I have put the .mat file here: https://drive.google.com/file/d/1G4PFaJMZ9HUHF6Bi31y8mSWl9PvKd_RP/view?usp=sharing

load('/NAS/home/edelaire/Desktop/test_GH_fluences/config_orig.mat', 'cfg')
fluenceRate = mcxlab(cfg); 

This reproduces the bug.

@Edouard2laire
Author

OK, I think I found the origin of the bug; it might have nothing to do with MCXLAB.
Here is the output of mcxlab(cfg,'preview'); with the resampled MRI:

[preview screenshot: MRI_resampled]

and the output with the original MRI:
[preview screenshot: MRI_orig]

The source is inside the brain, and I guess that is why it does not work. However, if I display the position/normal with NIRSTORM, I get this:
[screenshot from 2022-05-11 14-11-50: source position/normal displayed in NIRSTORM]

So I don't know what could be causing this issue.

@Edouard2laire
Author

Edouard2laire commented May 14, 2022

So my thought was that the error was due to some coordinate issue in Brainstorm (see brainstorm-tools/brainstorm3#543). However, that doesn't seem to be the case.

One interesting thing I realized is that if I divide the source position by the voxel size (here: https://github.com/Nirstorm/nirstorm/blob/master/bst_plugin/forward/process_nst_cpt_fluences.m#L309), then mcxlab(cfg,'preview') displays the source correctly on the head:
[screenshot from 2022-05-14 14-57-06: preview with the corrected source position]
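
To be explicit, the conversion I mean is simply the following (a sketch; srcpos_mm and srcdir are my own names for the optode position in mm and its orientation, not variables from the script):

% Sketch of the conversion discussed above (srcpos_mm / srcdir are hypothetical variables)
voxel_size_mm = 0.8;                        % same value as cfg.unitinmm
cfg.srcpos = srcpos_mm ./ voxel_size_mm;    % mm -> grid (voxel) coordinates
cfg.srcdir = srcdir ./ norm(srcdir);        % keep the direction a unit vector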

However, I then run into a memory issue when launching mcxlab:

Running simulations for configuration #1 ...
mcx.gpuid=1;
mcx.autopilot=1;
mcx.respin=1;
mcx.seed=1648335518;
mcx.nphoton=1e+07;
mcx.dim=[320 320 320];
mcx.mediabyte=4;
mcx.unitinmm=0.8;
mcx.isreflect=1;
mcx.isrefint=1;
mcx.tstart=0;
mcx.tend=5e-09;
mcx.tstep=5e-09;
mcx.isnormalized=1;
mcx.issrcfrom0=0;
mcx.srcpos=[206.25 228.75 221.25];
mcx.srcdir=[-0.464129 -0.626464 -0.6262 0];
mcx.medianum=6;
MCXLAB ERROR -2 in unit mcx_core.cu:1963: out of memory
Error from thread (0): out of memory
C++ Error: MCXLAB Terminated due to an exception!

Edit: if I use a more recent GPU (GeForce GTX 1080 Ti), then it runs without any issue. Is it expected that a smaller voxel size requires a GPU with more memory?
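
A rough back-of-envelope estimate of the array sizes involved (assuming only the label volume plus one single-precision fluence volume per time gate; actual MCX allocations also include photon and detector buffers that I am not counting):

% Rough memory estimate under the assumptions stated above
dim_08 = [320 320 320];                          % 0.8 mm grid, from the log above
dim_10 = [256 256 256];                          % hypothetical size of the 1 mm resampled grid
approx_MB = @(dim) prod(dim) * (4 + 4) / 2^20;   % mediabyte=4 plus a 4-byte float per voxel
fprintf('0.8 mm grid ~%.0f MB, 1 mm grid ~%.0f MB\n', approx_MB(dim_08), approx_MB(dim_10));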

Just to clarify, should srcpos be given in mm or in voxel coordinates?

Also, it seems that the fluence estimate is spatially much smoother when estimated from the 1mm resolution MRI than from the 0.8mm MRI. Is that expected? It causes issues in NIRSTORM when we try to normalize the fluence by the fluence intensity at the entry point (e.g. https://github.com/Nirstorm/nirstorm/blob/master/bst_plugin/forward/process_nst_import_head_model.m#L305-L326), as the voxel corresponding to the source position contains 0.
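
(The check is essentially the following sketch, assuming flux is the mcxlab output and cfg.srcpos is already in voxel coordinates.)

% Sketch of the check described above
src_vox = round(cfg.srcpos);                              % nearest voxel to the source
val = flux.data(src_vox(1), src_vox(2), src_vox(3), 1);   % fluence at the entry voxel, first time gate
fprintf('fluence at the source voxel: %g\n', val);        % comes back 0 on the 0.8 mm grid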

Edit: I just saw in the example scripts that this is expected. I increased the number of photons and got good results. We can close this now :)

@fangq
Owner

fangq commented May 16, 2022

@Edouard2laire, sorry, I did not read your updated post carefully enough. Regarding the unit of srcpos: it is in voxel units, and the same applies to detpos, srcparam, etc.
