
Better support for Random123 routines in verbatim blocks for GPU #130

Open
pramodk opened this issue Mar 31, 2019 · 1 comment
Comments

pramodk commented Mar 31, 2019

Consider the following example from the AMPA and GABA synapses of the BBP model:

VERBATIM
static void bbcore_read(double* x, int* d, int* xx, int* offset, _threadargsproto_) {
  assert(!_p_rng);
  uint32_t* di = ((uint32_t*)d) + *offset;
  if (di[0] != 0 || di[1] != 0 || di[2] != 0) {
      nrnran123_State** pv = (nrnran123_State**)(&_p_rng);
      *pv = nrnran123_newstream3(di[0], di[1], di[2]);
      char which = (char)di[4];
      nrnran123_setseq(*pv, di[3], which);
  }
  *offset += 5;
}
ENDVERBATIM

INITIAL {
        LOCAL tp_AMPA, tp_NMDA
        tp_AMPA = (tau_r_AMPA*tau_d_AMPA)/(tau_d_AMPA-tau_r_AMPA)*log(tau_d_AMPA/tau_r_AMPA)
       ....

        VERBATIM
        if( usingR123 ) {
            nrnran123_setseq((nrnran123_State*)_p_rng, 0, 0);
        }
        ENDVERBATIM
}

When these MOD files are compiled for GPU, nrnran123_setseq gets replaced with cu_nrnran123_setseq (because of a macro in nrnran123.h).
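
For illustration only, the remapping in nrnran123.h presumably amounts to a preprocessor substitution along these lines (the guard name and the full list of remapped functions below are assumptions, not the actual header contents):

/* Hypothetical sketch of the GPU remapping in nrnran123.h; the real
   build-time guard and the set of remapped functions may differ. */
#ifdef CORENEURON_ENABLE_GPU          /* assumed guard */
#define nrnran123_newstream3 cu_nrnran123_newstream3
#define nrnran123_setseq     cu_nrnran123_setseq
#endif

With such a substitution in place, every occurrence of nrnran123_setseq in a VERBATIM block is rewritten at compile time, regardless of whether the enclosing function runs on the host or on the device.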

The nrnran123_setseq call in the INITIAL block executes on the GPU, so it is fine for it to resolve to cu_nrnran123_setseq. However, bbcore_read is called in a CPU context, and invoking the CUDA routine cu_nrnran123_setseq there results in an abort.
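
As a stop-gap one could imagine undoing the remapping inside host-only VERBATIM code such as bbcore_read, assuming the plain host implementations of the Random123 routines are still linked in. This is only a sketch and is fragile (the #undef affects the rest of the translation unit); the real ask here is that code generation distinguishes host and device contexts so MOD files do not have to:

VERBATIM
/* Hypothetical per-file workaround: bbcore_read always runs on the CPU,
   so drop the cu_ remapping and fall back to the host entry points. */
#ifdef nrnran123_setseq
#undef nrnran123_setseq
#undef nrnran123_newstream3
#endif
ENDVERBATIM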

pramodk added the codegen, refactoring, and gpu labels on Mar 31, 2019
ohm314 added this to the v0.3 milestone on Apr 23, 2019

alkino commented Oct 22, 2024

I think this is fixed by #1125; the random functions are no longer written directly.
