I have written a sample code to test the split op based on the develop branch:
import paddle.fluid as fluid

input = fluid.layers.data(name='data', shape=[-1, 4], dtype="float32")
x0, x1 = fluid.layers.split(input, num_or_sections=2, dim=0)
print(x0.shape)
print(x1.shape)
but no error message is reported. Which version of Paddle do you use?
You should test with num_or_sections=[2, 2]:
import paddle.fluid as fluid
input = fluid.layers.data(shape=[-1, 4], dtype="float32", name='data')
x0, x1 = fluid.layers.split(input, num_or_sections=[2, 2], dim=0)
print(x0.shape)
print(x1.shape)
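As a quick sanity check outside Paddle, the intended slicing can be sketched with NumPy. Note the analogy is loose: np.split takes cut indices (or an equal-section count), not a list of section sizes like num_or_sections:

```python
import numpy as np

a = np.arange(16, dtype=np.float32).reshape(4, 4)  # stand-in for a [4, 4] tensor

# num_or_sections=[2, 2] along dim 0 corresponds to cutting at index 2
x0, x1 = np.split(a, [2], axis=0)
print(x0.shape)  # (2, 4)
print(x1.shape)  # (2, 4)
```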
reference: https://www.paddlepaddle.org.cn/documentation/docs/zh/api_cn/layers_cn/split_cn.html#split
If your batch size is fixed as 4 = 2 + 2, you had better fix the batch size in the data layer.
I use py_reader to build the network. If I fix the batch size in the py_reader, it will produce a tensor with 5 dimensions, because it appends -1 to the shapes.
test code:
import numpy as np
import paddle
import paddle.fluid as fluid

queue_capacity = 64
batch_size = -1
#batch_size = 8
image_shape = [3, 224, 224]

py_reader = fluid.layers.py_reader(
    capacity=queue_capacity,
    shapes=[[batch_size] + image_shape, [batch_size, 1]],
    lod_levels=[0, 0],
    dtypes=["float32", "int64"],
    use_double_buffer=True)
image, label = fluid.layers.read_file(py_reader)
conv = fluid.layers.conv2d(image, 16, 3, 2, 1)
output, _ = fluid.layers.split(conv, num_or_sections=[4, 4], dim=0)
print(output.shape)
fluid.layers.Print(output)

def reader():
    for _ in range(100):
        # cast to float32 to match the dtype declared in py_reader
        image = np.random.rand(3, 224, 224).astype('float32')
        label = 0
        yield (image, label)

train_reader = paddle.batch(reader, batch_size=8, drop_last=True)
py_reader.decorate_paddle_reader(train_reader)
py_reader.start()

fetch_list = [output.name]
place = fluid.CUDAPlace(0)
exe = fluid.Executor(place)
exe.run(fluid.default_startup_program())
res = exe.run(
    fluid.default_main_program(),
    fetch_list=fetch_list)
print(res[0].shape)
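For what it's worth, if the batch size were fixed at 8, the shapes flowing through the network above can be sketched with NumPy; the conv output size is my own arithmetic for kernel 3, stride 2, padding 1, not something the op reports:

```python
import numpy as np

# conv2d(image, 16, 3, 2, 1) on a [8, 3, 224, 224] input yields spatial size
# (224 + 2*1 - 3) // 2 + 1 = 112, i.e. a [8, 16, 112, 112] feature map
conv_out = np.zeros((8, 16, 112, 112), dtype=np.float32)

# num_or_sections=[4, 4] along dim 0 -> cut the batch at index 4
output, rest = np.split(conv_out, [4], axis=0)
print(output.shape)  # (4, 16, 112, 112)
```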