At first it keeps printing progress dots, but then it stops entirely; both CPU usage and nvidia-smi show zero utilization. I left it running for a whole day and it was still the same.
It hangs indefinitely at the `.............` line here:
I0831 11:54:57.074611 4988 Util.cpp:166] commandline: --use_gpu=True --rnn_use_batch=True --trainer_count=2
[INFO 2018-08-31 11:54:58,520 layers.py:2716] output for __conv_0__: c = 32, h = 81, w = 54, size = 139968
[INFO 2018-08-31 11:54:58,521 layers.py:3361] output for __batch_norm_0__: c = 32, h = 81, w = 54, size = 139968
[INFO 2018-08-31 11:54:58,522 layers.py:7533] output for __scale_sub_region_0__: c = 32, h = 81, w = 54, size = 139968
[INFO 2018-08-31 11:54:58,522 layers.py:2716] output for __conv_1__: c = 32, h = 41, w = 54, size = 70848
[INFO 2018-08-31 11:54:58,522 layers.py:3361] output for __batch_norm_1__: c = 32, h = 41, w = 54, size = 70848
[INFO 2018-08-31 11:54:58,523 layers.py:7533] output for __scale_sub_region_1__: c = 32, h = 41, w = 54, size = 70848
[INFO 2018-08-31 11:55:02,229 model.py:243] begin to initialize the external scorer for decoding
[INFO 2018-08-31 11:55:02,317 model.py:253] language model: is_character_based = 1, max_order = 5, dict_size = 0
[INFO 2018-08-31 11:55:02,317 model.py:254] end initializing scorer
[INFO 2018-08-31 11:55:02,317 tuneaishell.py:119] start tuning ...
----------- Configuration Arguments -----------
alpha_from: 1.0
alpha_to: 3.2
batch_size: 16
beam_size: 500
beta_from: 0.1
beta_to: 0.45
cutoff_prob: 1.0
cutoff_top_n: 40
error_rate_type: wer
lang_model_path: models/lm/zh_giga.no_cna_cmn.prune01244.klm
mean_std_path: data/aishell/mean_std.npz
model_path: ./interval-passes/params-38+12_4augments.tar.gz
num_alphas: 45
num_batches: 1
num_betas: 8
num_conv_layers: 2
num_proc_bsearch: 8
num_proc_data: 4
num_rnn_layers: 3
rnn_layer_size: 1024
share_rnn_weights: False
specgram_type: linear
trainer_count: 2
tune_manifest: data/aishell/manifest.dev
use_gpu: True
use_gru: True
vocab_path: data/aishell/vocab.txt
------------------------------------------------
I0831 11:55:02.656805 4988 MultiGradientMachine.cpp:99] numLogicalDevices=1 numThreads=2 numDevices=2
....................................................................................................................................................................................
You could first make the grid search sparser, so that tuning finishes one batch quickly, and see whether that works.
You mean setting num_alphas and num_betas to smaller values?
Yes.
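The grid size is multiplicative (num_alphas × num_betas decodes per batch), so shrinking both cuts the work dramatically. A minimal sketch of what the sweep covers, using the alpha/beta ranges from the configuration above (the helper name `make_tune_grid` is illustrative, not a function in the tuning script):

```python
import numpy as np

def make_tune_grid(alpha_from, alpha_to, num_alphas,
                   beta_from, beta_to, num_betas):
    """Build the (alpha, beta) grid that tuning sweeps over."""
    alphas = np.linspace(alpha_from, alpha_to, num_alphas)
    betas = np.linspace(beta_from, beta_to, num_betas)
    return [(a, b) for a in alphas for b in betas]

# Current settings: 45 alphas x 8 betas = 360 beam-search decodes per batch
dense = make_tune_grid(1.0, 3.2, 45, 0.1, 0.45, 8)

# A sparser grid, e.g. 5 x 3 = 15 decodes, covers the same ranges
# with ~24x less work, enough to check that one batch completes
sparse = make_tune_grid(1.0, 3.2, 5, 0.1, 0.45, 3)

print(len(dense), len(sparse))
```

If a single batch finishes with the sparse grid, the hang is likely tied to the volume of parallel beam-search decodes rather than the model itself, and you can refine the grid from there.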
Do you happen to know whether DeepASR has a tuning step like this?
DeepASR does not have this step. It is still immature at the moment, essentially just a demo.
Apart from being based on Fluid, does DeepASR differ from the current DeepSpeech in other respects, such as recognition accuracy? Thanks.