I see, thanks for the clarification about the copy-on-write mechanism! I'm aware that the loaded modules are not actually shared, but torch tensors can in fact be shared through shared memory (as stated here, if you'd like to have a look), though that's as far as I ever went, unfortunately.
Also, as far as I know, since the tensor storage is somehow shared in torch's C++ back-end, I need to keep the Python variable around so that I maintain the sort-of pointer to the actual storage. I may very well have gotten this wrong, to be honest, but that is what I understood from the docs.
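For reference, here is a minimal sketch of what I mean by sharing through shared memory; it uses plain torch.multiprocessing, and `share_memory_()` moves the tensor's storage into shared memory, as I understand it:

```python
import torch
import torch.multiprocessing as mp

def worker(t):
    # The child receives a handle to the same underlying storage,
    # so this in-place write is visible to the parent.
    t += 1

if __name__ == "__main__":
    t = torch.zeros(3)
    t.share_memory_()  # move the tensor's storage into shared memory

    p = mp.Process(target=worker, args=(t,))
    p.start()
    p.join()

    # The parent keeps `t` alive while workers use it, which is the
    # "keep the Python var around" point above.
    print(t)  # tensor([1., 1., 1.])
```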
Well, I guess there are two main problems then. First, this does not work on Windows, where processes are spawned rather than forked, so there is no copy-on-write (in fact, I printed out the PIDs, and every worker had its own copy of the variables). Not a big deal, but I'd like to keep both environments working.
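For completeness, this is roughly the check I ran, stripped down and independent of Dramatiq; it assumes the default start methods, i.e. fork on Linux and spawn on Windows:

```python
import multiprocessing as mp
import os

state = {"loaded": False}

def check():
    # Under fork, the child inherits the parent's mutated state
    # ({'loaded': True}); under spawn, the module is re-imported,
    # so the child sees a fresh {'loaded': False}.
    print(f"child  pid={os.getpid()} state={state}")

if __name__ == "__main__":
    state["loaded"] = True
    print(f"parent pid={os.getpid()} state={state}")
    p = mp.Process(target=check)
    p.start()
    p.join()
```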
The second issue I encountered (though I guess I can solve it with an environment variable) is that whenever I import the tasks somewhere else to call `send()` on them, the same heavy modules are re-imported in the calling script.
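Something like this is what I had in mind; just a sketch, and `DRAMATIQ_WORKER` is a name I made up, set only in the environment of the worker command:

```python
# tasks.py
import os

import dramatiq
from dramatiq.brokers.rabbitmq import RabbitmqBroker

dramatiq.set_broker(RabbitmqBroker())

model = None
if os.environ.get("DRAMATIQ_WORKER") == "1":
    # Heavy imports and model loading happen only in worker processes;
    # a script that imports this module just to call predict.send(...)
    # skips this branch entirely.
    import torch
    model = torch.load("model.pt")  # hypothetical model file

@dramatiq.actor
def predict(x):
    # Only ever executed inside a worker, where the branch above ran.
    print(model(torch.tensor([float(x)])))
```

That is, I'd start the workers with `DRAMATIQ_WORKER=1 dramatiq tasks`, while the sending script does a plain `from tasks import predict; predict.send(3)` and never pays the loading cost.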
In your opinion, is there a way to avoid this, or is there no chance of making it work without customizing Dramatiq?
Anyway, thanks again for the huge help here. I'm just now starting to get a grasp of the beauty of this library; it is really a pleasure to read its code (provided I understand what's happening :) ).