Dynamically Append N-dimensional Array
Solution 1:
Assuming all your files produce arrays of the same shape and you want to join them along the first dimension, there are several options:
import numpy as np

alist = []
for f in files:
    data = foo(f)          # foo reads one file and returns an array
    alist.append(data)     # append the array, not the filename
arr = np.concatenate(alist, axis=0)
concatenate takes a list. There are variations if you want to add a new axis (np.array(alist), np.stack, etc.).
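As a quick illustration of the difference (a minimal sketch with made-up shapes; a and b stand in for two loaded files):

import numpy as np

a = np.ones((2, 100))
b = np.ones((2, 100))

np.concatenate([a, b], axis=0).shape   # (4, 100)    - joined along an existing axis
np.stack([a, b], axis=0).shape         # (2, 2, 100) - a new leading axis is added
np.array([a, b]).shape                 # (2, 2, 100) - same result as stack here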
Appending to a list is fast, since it just means adding a pointer to the data object. concatenate creates a new array from the components; it's compiled, but still relatively slow by comparison.
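If you want to see the cost difference yourself, here is a rough timing sketch (chunk shapes and counts are made up; the actual numbers depend on your data and machine):

import numpy as np
from timeit import timeit

chunks = [np.ones((2, 100)) for _ in range(1000)]

# Appending to a list only stores references; the real work happens once.
t_once = timeit(lambda: np.concatenate(chunks, axis=0), number=10)

# Concatenating inside the loop copies the growing array on every iteration.
def grow():
    arr = chunks[0]
    for c in chunks[1:]:
        arr = np.concatenate((arr, c), axis=0)
    return arr

t_loop = timeit(grow, number=10)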
If you must/want to make a new array at each stage you could write:
arr = foo(files[0])
for f in files[1:]:
    data = foo(f)
    arr = np.concatenate((arr, data), axis=0)
This is probably slower, though if the file loading step is slow enough you might not notice the difference.
With care you might be able to start with arr = np.zeros((0, 2, 100)) and read all files in the loop. You have to make sure the initial 'empty' array has a compatible shape; new users often have problems with this.
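A minimal sketch of that pattern, assuming foo returns arrays of shape (n, 2, 100) so the trailing dimensions match the 'empty' starting array:

import numpy as np

arr = np.zeros((0, 2, 100))            # 0 rows, but matching trailing dimensions
for f in files:
    data = foo(f)                      # assumed to return shape (n, 2, 100)
    arr = np.concatenate((arr, data), axis=0)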
Solution 2:
If you absolutely want to do it during iteration then:
import numpy as np

def arrayappend():
    con = None
    for i, d in enumerate(files_list):
        data = function(d)             # function turns one file into an array
        con = data if i == 0 else np.vstack([con, data])
    return con
This should stack it vertically.
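To see what "vertically" means here, a small sketch with made-up shapes:

import numpy as np

a = np.ones((3, 100))
b = np.ones((5, 100))
np.vstack([a, b]).shape    # (8, 100) - rows are stacked along axis 0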
Solution 3:
Not very pretty, but does it achieve what you want? It is quite unoptimized.
import numpy as np

def arrayappend():
    # n, read, function, and file_i are placeholders from the question
    for i in range(n):
        data = read(file_i)
        try:
            con                        # raises NameError until con exists
            con = np.concatenate((con, function(data)))
        except NameError:
            con = function(data)
    return con
The first iteration will take the except branch; subsequent iterations won't.
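A self-contained sketch of the same try/except pattern, with a list of dummy arrays standing in for the read/function calls:

import numpy as np

def arrayappend(chunks):
    # chunks stands in for the per-file arrays; this is illustrative only.
    for data in chunks:
        try:
            con                                   # NameError on the first pass
            con = np.concatenate((con, data))
        except NameError:
            con = data
    return con

parts = [np.ones((2, 100)) for _ in range(3)]
arrayappend(parts).shape                          # (6, 100)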