Reducing particles from a set of IDs #21
After an offline discussion with @KseniaBastrakova and @PrometheusPi, I am using this approach:
Now I submitted the jobs as mentioned in #22.
I guess I should also update some other attributes/datasets when changing the particles. Could it be some patches?
Yes, I think you missed the "unitSI" or "macroWeighted" attribute.
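For reference, attributes on openPMD records and their components can be inspected directly with h5py; a minimal sketch, assuming a hypothetical reduced file name and the timestep/species values used elsewhere in this thread:

import h5py

# Hypothetical names for illustration; adjust to the actual file layout.
timestep, species = 60000, "en_all"
with h5py.File("simData_filtered_{}.h5".format(timestep), "r") as f:
    record = f["/data/{}/particles/{}/momentum".format(timestep, species)]
    print(dict(record.attrs))        # record-level attributes, e.g. macroWeighted, weightingPower
    print(dict(record["x"].attrs))   # component-level attributes, e.g. unitSI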
This is the path to the file on taurus
Thanks @alex-koe.
@sbastrakov the file is now on
Thank you for providing the file. I think I understand your problem:
and the output is:
and output is:
`numParticlesOffset` has to be in the range [0, particle_species_size]. It indicates the start of a new particle chunk. So, I can suggest two solutions:
So what may have gone wrong is that when removing all particles except the IDs you wanted to keep, the patch information was not updated correspondingly @alex-koe. Not 100% sure, just seems plausible. I think the easiest fix is to just remove the patch datasets altogether then? (They are optional, and the reductor works when they are not present; it just does not work when they are present but wrong.)
@sbastrakov I agree, the patch information was not updated. Just removing the patches is the easiest approach.
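For reference, particle patches live under the species group as particlePatches/numParticles and particlePatches/numParticlesOffset; a rough sketch of checking them against the reduced species size and dropping them if they are stale (file name and group path are assumptions based on the script later in this thread):

import h5py

timestep, species = 60000, "en_all"   # hypothetical values
with h5py.File("simData_filtered_{}.h5".format(timestep), "r+") as f:
    sp = f["/data/{}/particles/{}".format(timestep, species)]
    n_particles = sp["particleId"].shape[0]
    if "particlePatches" in sp:
        offsets = sp["particlePatches/numParticlesOffset"][()]
        counts = sp["particlePatches/numParticles"][()]
        consistent = (offsets + counts <= n_particles).all() and counts.sum() == n_particles
        if not consistent:
            del sp["particlePatches"]   # patches are optional; stale ones confuse the reductor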
As I thought, no; I think @KseniaBastrakova will take a look.
@alex-koe I think I reproduced your problem and fixed it.
@KseniaBastrakova It is working! Great work.
@sbastrakov @KseniaBastrakova I have only a minor question. When I read out the position dataset, it looks the same for the original and reduced particles, and the data set is reduced in size. The weights seem to be OK, i.e., the sum of the weights gives the same charge. The average position and the spread in position are consistent between the original and the reduced set. However, I do not understand the momentum. This is the reduced set. For reading the momentum, I do the following (label is one of {x,y,z}, species is *en*, ...):
# Assumes f = h5py.File(...) opened on the output file, with numpy as np and scipy.constants as const imported.
u = f['/data/{}/particles/{}/momentum/{}'.format(timestep, species, label)]  # ~ gamma beta m c
w = f['/data/{}/particles/{}/weighting'.format(timestep, species)]           # macroparticle weighting
u_unitSI = u.attrs["unitSI"]
uy = u_unitSI * np.array(u) / np.array(w) / (const.m_e * const.c)  # the normalized momentum gamma beta_i
Do I have to take special care here to get the correct momentum?
Thanks for reporting. It could be that somewhere (not sure on whose side) the macroWeighted flag is treated incorrectly.
To extend on that thought: @alex-koe your way of computing momentum is not suitable for general openPMD files, as the momentum attribute could theoretically be for both single- and macro-particles, and there is a special attribute that informs on which way was used. However, I assume you know how it is for your particular file (that momentum is for macroparticles, like in PIConGPU output) and read it accordingly. So I think it is more likely there is something fishy in the reductor @KseniaBastrakova, e.g. maybe the reductor writes it out with a different macroWeighted setting.
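To illustrate that point, a sketch of reading a momentum component while honouring the openPMD macroWeighted and weightingPower attributes could look as follows (this is not the reductor's code; file name, timestep, and species are hypothetical):

import numpy as np
import h5py
import scipy.constants as const

timestep, species, label = 60000, "en_all", "y"   # hypothetical values
with h5py.File("simData_filtered_{}.h5".format(timestep), "r") as f:
    base = "/data/{}/particles/{}".format(timestep, species)
    record = f[base + "/momentum"]
    comp = record[label]
    w = f[base + "/weighting"][()]
    p = comp.attrs["unitSI"] * comp[()]
    # If the record is macro-weighted, divide out weighting**weightingPower
    # to recover the per-(real)-particle momentum.
    if record.attrs.get("macroWeighted", 0):
        p = p / w ** record.attrs.get("weightingPower", 1.0)
    u_norm = p / (const.m_e * const.c)   # normalized momentum gamma * beta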
@sbastrakov so what would then be the correct way? I used pretty much the same approach that gave me the correct momentum for all simulations before and never had an issue. Is the 'reduced' file somehow in a different format?
@alex-koe thank you for the details. I think the problem is on my side: I found a bug that can cause this behavior. I will fix it today.
@alex-koe, now it works better. Could you make
@KseniaBastrakova I tried it again. It is now different :-D
@alex-koe. So, how do you build the graph and which simulation do you use? Maybe it will help me, since I can't reproduce it.
First, simData_beta_60000 before reduction and after reduction with a ratio of deleted particles = 0.1. The graphs look similar, so I can't reproduce the problem.
I put the submit script, the Jupyter notebook I used for the plots shown, and the simulation files for input and output into the tar archive.
I don't think a large ratio should be a problem. It would certainly make the sampling much worse, but if that's acceptable, it otherwise should not be an issue.
@alex-koe, could you give me access to the
@KseniaBastrakova you now have access rights to the file.
@alex-koe, thank you for the scripts and the sample, they are really useful. The problem is that the momenta are "macroWeighted", which means we store the total momentum of all electrons in a macroparticle. So, if you delete 0.9, the starting number of macroparticles is ... So, I have a suggestion for you: if you expect that all macroparticles are not macroweighted, I can add an "-ignore macroweighted" flag to the reductor.
I believe this should not be done as part of the reduction, as then it would be really easy to use it wrongly.
I can make an additional script to fix it, as a temporary solution.
Sorry, maybe I do not understand correctly: is it not possible to redistribute the weighting after the reduction process? I would like to have the total physical quantities, i.e. energy, momentum, charge, and mass, conserved. That is, if I have 100 pC of charge in the original file and the average momentum is 100 MeV/c, then the reduced file should have the same charge and average momentum, just with a reduced number of macroparticles.
@alex-koe let me reiterate. Of course, it is possible to remove some macroparticles and redistribute their weights among the others so that the physical conservation laws are in place, and that's the goal of reduction. The point @KseniaBastrakova was trying to make was merely that in your file, the momenta were stored per macroparticle and not per physical particle. The openPMD standard has a special property for more or less each piece of particle data that says whether it's for a real or a macro-particle; this property is called macroWeighted.
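As a toy illustration of that idea (not this library's actual algorithm), removing macroparticles and rescaling the remaining weights so the total weight, and hence the total charge, is conserved could look like this:

import numpy as np

def reduce_with_weight_redistribution(weights, keep_mask):
    """Toy sketch: keep the masked macroparticles and rescale their weights
    so that the total weight (and hence the total charge) is conserved."""
    total_before = weights.sum()
    kept = weights[keep_mask].astype(float)
    kept *= total_before / kept.sum()
    return kept

# Example: drop roughly 90% of macroparticles at random.
rng = np.random.default_rng(42)
w = rng.uniform(1.0e3, 2.0e3, size=1000)
mask = rng.random(w.size) > 0.9
w_reduced = reduce_with_weight_redistribution(w, mask)
assert np.isclose(w.sum(), w_reduced.sum())   # total weight ~ total charge is conserved

This naive global rescaling only conserves the total weight; the actual reduction algorithms are more careful and also try to preserve the momentum and energy distributions.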
Sorry, @KseniaBastrakova is there an issue with the reduction script or with the input data?
I am also not sure what the current status is. To reiterate my previous point, this reduction package does in principle support both macro-weighted and non macro-weighted data. So if it produces incorrect results, this is a bug in the implementation and we should investigate and fix it, but it is not a principal limitation. If I understood @KseniaBastrakova's last point correctly, she did not see an obvious error in the last run, but I guess @alex-koe did. Maybe it is better to talk offline.
Yes, I think it makes sense to talk offline. We definitely have a misunderstanding here.
Okay - then let's meet - today is quite busy for me, how about tomorrow at 11 AM?
Works fine for me.
For me too.
Just spoke with @alex-koe offline - a meeting tomorrow at 11 AM works for him as well.
@KseniaBastrakova Great work! 👍 The momenta look correct. Could you please do a
@PrometheusPi here are graphs (coordinate, momentum) for each dimension before and after reduction (for file
@KseniaBastrakova Thanks - could you please use the
@PrometheusPi, I added weights:
@KseniaBastrakova Thank you, that looks really good! d^2Q/dydp_y looks perfect (y \in {x,y,z}).
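For reproducibility, a weighted 2D phase-space histogram of that kind can be built with numpy/matplotlib roughly as follows; y, p_y, and w stand for position, momentum, and weighting arrays read as in the earlier snippets (all names here are illustrative):

import numpy as np
import matplotlib.pyplot as plt

def plot_phase_space(y, p_y, w, bins=256):
    # Weight each macroparticle by its weighting so the histogram is ~ dQ per bin.
    hist, y_edges, p_edges = np.histogram2d(y, p_y, bins=bins, weights=w)
    plt.imshow(hist.T, origin="lower", aspect="auto",
               extent=[y_edges[0], y_edges[-1], p_edges[0], p_edges[-1]])
    plt.xlabel("y")
    plt.ylabel("p_y")
    plt.colorbar(label="weighted counts ~ dQ")
    plt.show()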
@alex-koe Can you please try this with your data?
@PrometheusPi Here is a comparison between the original (left col.) and reduced particle sets (right col.):
For documentation purposes, I copy the script used for the particle filtering:
"""
Needed input:
+ timestep
+ filename_bunch_idendifiers
+ filename_filtered_file
"""
###### init ####
import numpy as np
import scipy as sp
import h5py
import scipy.constants as const
timestep = 60000
filename_bunch_idendifiers = "bunch-identifiers.dat"
################ WARNING ################
# The filtered file will be overwritten; create it first as a copy of the original data file.
filename_filtered_file = "simData_run006_filtered_{}.h5".format(timestep)
# read particle ids for filtering
ids = np.loadtxt(filename_bunch_idendifiers, dtype=np.uint64)
##### open h5 files
filtered_file = h5py.File(filename_filtered_file, "r+")
h = filtered_file["/data/{}/particles/en_all".format(timestep)]
current_ids = h["particleId"][()]
m = np.in1d(current_ids, ids)
if m.sum() != len(ids):
    print("ERR: requested IDs are not fully contained in H5 file. Abort.")
    raise SystemExit
paths = ["particleId", "weighting",
"momentum/x", "momentum/y", "momentum/z",
"momentumPrev1/x", "momentumPrev1/y", "momentumPrev1/z",
"position/x", "position/y", "position/z",
"positionOffset/x", "positionOffset/y", "positionOffset/z"]
for p in paths:
    temp = h[p][m]                          # keep only the selected particles
    temp_attrs = list(h[p].attrs.items())   # snapshot the attributes before deleting the dataset
    del h[p]
    h[p] = temp
    for name, value in temp_attrs:
        h[p].attrs.create(name=name, data=value)
for p in ["mass", 'charge']:
temp = h[p].attrs['shape']
temp[0] = np.sum(m)
h[p].attrs['shape'] = temp
#### delete particle patches because the reduction script does not process them correctly
del filtered_file["/data/{}/particles/en_all/particlePatches".format(timestep)]
filtered_file.close()
@KseniaBastrakova I think this issue can be closed. I just do not know how to do it ...
Currently, the library only supports selecting the particle species for reduction. There was a request to add functionality that reduces the number of particles only within a selected set.
As a small feature, this could be an additional script that takes as an argument a file (in .dat format) containing the indices of the particles to which the reduction should be applied. The reduction algorithm would then only be applied to the particles of the openPMD file with the selected indices.
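A minimal sketch of the selection step under those assumptions: load the particle indices from the .dat file and build a boolean mask over the species' particleId dataset; the call into the reductor itself is left as a hypothetical placeholder since the actual API may differ:

import numpy as np
import h5py

# Hypothetical file names and paths for illustration.
indices = np.loadtxt("selected-particles.dat", dtype=np.uint64)
with h5py.File("simData.h5", "r") as f:
    particle_ids = f["/data/60000/particles/en_all/particleId"][()]
selection_mask = np.in1d(particle_ids, indices)   # True for particles the reduction should touch
# reduce_in_place(species_group, mask=selection_mask)   # hypothetical call into the reductor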