Kaldi nnet3 model training log

Below is the complete console output from running ./run.sh in the WSJ s5 recipe (egs/wsj/s5): the run gets through speed/volume perturbation, feature extraction, UBM and i-vector extractor training, and the first nnet3 iterations, then aborts when a GPU training job fails at iteration 4.

(base) server@server-PowerEdge-R740:~/kaldi-trunk/egs/wsj/s5$ ./run.sh
stage 4*************
stage 5***************
stage 6****************
****************7 run_mmi_tri4b 4b****************
****************8 run_nnet2****************
local/nnet3/run_ivector_common.sh: preparing directory for low-resolution speed-perturbed data (for alignment)
utils/data/perturb_data_dir_speed_3way.sh: making sure the utt2dur and the reco2dur files are present
... in data/train_si284, because obtaining it after speed-perturbing
... would be very slow, and you might need them.
utils/data/get_utt2dur.sh: data/train_si284/utt2dur already exists with the expected length.  We won't recompute it.
utils/data/get_reco2dur.sh: data/train_si284/reco2dur already exists with the expected length.  We won't recompute it.
utils/data/perturb_data_dir_speed.sh: generated speed-perturbed version of data in data/train_si284, in data/train_si284_sp_speed0.9
utils/validate_data_dir.sh: Successfully validated data-directory data/train_si284_sp_speed0.9
utils/data/perturb_data_dir_speed.sh: generated speed-perturbed version of data in data/train_si284, in data/train_si284_sp_speed1.1
utils/validate_data_dir.sh: Successfully validated data-directory data/train_si284_sp_speed1.1
utils/data/combine_data.sh data/train_si284_sp data/train_si284 data/train_si284_sp_speed0.9 data/train_si284_sp_speed1.1
utils/data/combine_data.sh: combined utt2uniq
utils/data/combine_data.sh [info]: not combining segments as it does not exist
utils/data/combine_data.sh: combined utt2spk
utils/data/combine_data.sh [info]: not combining utt2lang as it does not exist
utils/data/combine_data.sh: combined utt2dur
utils/data/combine_data.sh [info]: **not combining utt2num_frames as it does not exist everywhere**
utils/data/combine_data.sh: combined reco2dur
utils/data/combine_data.sh [info]: **not combining feats.scp as it does not exist everywhere**
utils/data/combine_data.sh: combined text
utils/data/combine_data.sh [info]: **not combining cmvn.scp as it does not exist everywhere**
utils/data/combine_data.sh [info]: not combining vad.scp as it does not exist
utils/data/combine_data.sh [info]: not combining reco2file_and_channel as it does not exist
utils/data/combine_data.sh: combined wav.scp
utils/data/combine_data.sh [info]: not combining spk2gender as it does not exist
fix_data_dir.sh: kept all 18912 utterances.
fix_data_dir.sh: old files are kept in data/train_si284_sp/.backup
utils/data/perturb_data_dir_speed_3way.sh: generated 3-way speed-perturbed version of data in data/train_si284, in data/train_si284_sp
utils/validate_data_dir.sh: Successfully validated data-directory data/train_si284_sp
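The 3-way speed perturbation above triples the training set (18912 utterances = 3 × 6304) by resampling each recording at factors 0.9, 1.0, and 1.1; playing audio at speed factor f divides its duration by f, which is why utt2dur is generated before perturbing. A minimal sketch of the bookkeeping side, with a toy utt2dur dict (the real utils/data/perturb_data_dir_speed.sh resamples via sox and prefixes utterance ids with e.g. sp0.9-):

```python
def perturb_3way(utt2dur, factors=(0.9, 1.1)):
    """Return (new_utt2dur, utt2uniq) covering original + perturbed copies."""
    new_utt2dur = dict(utt2dur)          # factor 1.0: the originals
    utt2uniq = {u: u for u in utt2dur}
    for f in factors:
        for utt, dur in utt2dur.items():
            sp_utt = f"sp{f}-{utt}"      # Kaldi-style perturbed utterance id
            new_utt2dur[sp_utt] = dur / f  # speed f scales duration by 1/f
            utt2uniq[sp_utt] = utt       # maps back to the 'real' utterance
    return new_utt2dur, utt2uniq

# Toy utterance ids and durations (illustrative, not from this data set).
utt2dur = {"011c0201": 4.0, "011c0202": 6.0}
dur, uniq = perturb_3way(utt2dur)
# 2 utterances become 6, matching the 6304 -> 18912 growth in the log.
```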
local/nnet3/run_ivector_common.sh: making MFCC features for low-resolution speed-perturbed data (needed for alignments)
steps/make_mfcc.sh --nj 30 --cmd run.pl data/train_si284_sp
utils/validate_data_dir.sh: Successfully validated data-directory data/train_si284_sp
steps/make_mfcc.sh: [info]: no segments file exists: assuming wav.scp indexed by utterance.
steps/make_mfcc.sh: Succeeded creating MFCC features for train_si284_sp
steps/compute_cmvn_stats.sh data/train_si284_sp
Succeeded creating CMVN stats for train_si284_sp
local/nnet3/run_ivector_common.sh: fixing input data-dir to remove nonexistent features, in case some
.. speed-perturbed segments were too short.
fix_data_dir.sh: kept all 18912 utterances.
fix_data_dir.sh: old files are kept in data/train_si284_sp/.backup
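compute_cmvn_stats.sh accumulates, per speaker, the frame count, per-dimension sum, and sum of squares of the MFCC frames; normalization is then applied from those stats at decode/training time with apply-cmvn. (Note that the nnet3 training later in this log passes --norm-means=false --norm-vars=false, so the stats are computed but mean/variance normalization is switched off there.) A minimal numpy sketch of the stats and their application, purely for illustration:

```python
import numpy as np

def cmvn_stats(feats):
    """Accumulate CMVN stats: frame count, per-dim sum, per-dim sum of squares."""
    return len(feats), feats.sum(axis=0), (feats ** 2).sum(axis=0)

def apply_cmvn(feats, stats, norm_vars=True):
    """Subtract the per-dimension mean (and optionally divide by the std)."""
    n, s, s2 = stats
    mean = s / n
    out = feats - mean
    if norm_vars:
        var = s2 / n - mean ** 2
        out = out / np.sqrt(np.maximum(var, 1e-20))  # floor avoids div-by-zero
    return out

rng = np.random.default_rng(0)
feats = rng.normal(loc=3.0, scale=2.0, size=(500, 13))  # toy 13-dim MFCCs
normed = apply_cmvn(feats, cmvn_stats(feats))
```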
local/nnet3/run_ivector_common.sh: aligning with the perturbed low-resolution data
steps/align_fmllr.sh --nj 30 --cmd run.pl data/train_si284_sp data/lang exp/tri4b exp/tri4b_ali_train_si284_sp
steps/align_fmllr.sh: feature type is lda
steps/align_fmllr.sh: compiling training graphs
steps/align_fmllr.sh: aligning data in data/train_si284_sp using exp/tri4b/final.alimdl and speaker-independent features.
steps/align_fmllr.sh: computing fMLLR transforms
steps/align_fmllr.sh: doing final alignment.
steps/align_fmllr.sh: done aligning data.
steps/diagnostic/analyze_alignments.sh --cmd run.pl data/lang exp/tri4b_ali_train_si284_sp
steps/diagnostic/analyze_alignments.sh: see stats in exp/tri4b_ali_train_si284_sp/log/analyze_alignments.log
525 warnings in exp/tri4b_ali_train_si284_sp/log/align_pass1.*.log
201 warnings in exp/tri4b_ali_train_si284_sp/log/fmllr.*.log
528 warnings in exp/tri4b_ali_train_si284_sp/log/align_pass2.*.log
local/nnet3/run_ivector_common.sh: creating high-resolution MFCC features
utils/copy_data_dir.sh: copied data from data/train_si284_sp to data/train_si284_sp_hires
utils/validate_data_dir.sh: Successfully validated data-directory data/train_si284_sp_hires
utils/copy_data_dir.sh: copied data from data/test_dev93 to data/test_dev93_hires
utils/validate_data_dir.sh: Successfully validated data-directory data/test_dev93_hires
utils/copy_data_dir.sh: copied data from data/test_eval92 to data/test_eval92_hires
utils/validate_data_dir.sh: Successfully validated data-directory data/test_eval92_hires
utils/data/perturb_data_dir_volume.sh: data/train_si284_sp_hires/feats.scp exists; moving it to data/train_si284_sp_hires/.backup/ as it wouldn't be valid any more.
utils/data/perturb_data_dir_volume.sh: added volume perturbation to the data in data/train_si284_sp_hires
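perturb_data_dir_volume.sh makes the model robust to gain differences by rescaling each recording with a random volume factor; in Kaldi this is done by rewriting wav.scp to pipe the audio through sox's vol effect (which is why the old feats.scp had to be moved aside). A toy sample-level sketch; the [0.125, 2.0] factor range below is an assumption for illustration, not read from this log:

```python
import numpy as np

def perturb_volume(samples, rng, low=0.125, high=2.0):
    """Scale int16 audio by a random gain, clipping at the int16 range.

    The [low, high] range is an illustrative assumption; the real script
    (utils/data/perturb_data_dir_volume.sh) picks its own range and edits
    wav.scp rather than touching samples directly.
    """
    factor = rng.uniform(low, high)
    scaled = np.clip(samples.astype(np.float64) * factor, -32768, 32767)
    return scaled.astype(np.int16), factor

rng = np.random.default_rng(1)
wave = (1000 * np.sin(np.arange(16000) * 0.05)).astype(np.int16)  # 1 s toy tone
perturbed, factor = perturb_volume(wave, rng)
```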
steps/make_mfcc.sh --nj 30 --mfcc-config conf/mfcc_hires.conf --cmd run.pl data/train_si284_sp_hires
utils/validate_data_dir.sh: Successfully validated data-directory data/train_si284_sp_hires
steps/make_mfcc.sh: [info]: no segments file exists: assuming wav.scp indexed by utterance.
steps/make_mfcc.sh: Succeeded creating MFCC features for train_si284_sp_hires
steps/compute_cmvn_stats.sh data/train_si284_sp_hires
Succeeded creating CMVN stats for train_si284_sp_hires
fix_data_dir.sh: kept all 18912 utterances.
fix_data_dir.sh: old files are kept in data/train_si284_sp_hires/.backup
steps/make_mfcc.sh --nj 30 --mfcc-config conf/mfcc_hires.conf --cmd run.pl data/test_dev93_hires
steps/make_mfcc.sh: moving data/test_dev93_hires/feats.scp to data/test_dev93_hires/.backup
utils/validate_data_dir.sh: Successfully validated data-directory data/test_dev93_hires
steps/make_mfcc.sh: [info]: no segments file exists: assuming wav.scp indexed by utterance.
steps/make_mfcc.sh: Succeeded creating MFCC features for test_dev93_hires
steps/compute_cmvn_stats.sh data/test_dev93_hires
Succeeded creating CMVN stats for test_dev93_hires
fix_data_dir.sh: kept all 6304 utterances.
fix_data_dir.sh: old files are kept in data/test_dev93_hires/.backup
steps/make_mfcc.sh --nj 30 --mfcc-config conf/mfcc_hires.conf --cmd run.pl data/test_eval92_hires
steps/make_mfcc.sh: moving data/test_eval92_hires/feats.scp to data/test_eval92_hires/.backup
utils/validate_data_dir.sh: Successfully validated data-directory data/test_eval92_hires
steps/make_mfcc.sh: [info]: no segments file exists: assuming wav.scp indexed by utterance.
steps/make_mfcc.sh: Succeeded creating MFCC features for test_eval92_hires
steps/compute_cmvn_stats.sh data/test_eval92_hires
Succeeded creating CMVN stats for test_eval92_hires
fix_data_dir.sh: kept all 6304 utterances.
fix_data_dir.sh: old files are kept in data/test_eval92_hires/.backup
local/nnet3/run_ivector_common.sh: computing a subset of data to train the diagonal UBM.
utils/data/subset_data_dir.sh: reducing #utt from 18912 to 4728
local/nnet3/run_ivector_common.sh: computing a PCA transform from the hires data.
steps/online/nnet2/get_pca_transform.sh --cmd run.pl --splice-opts --left-context=3 --right-context=3 --max-utts 10000 --subsample 2 exp/nnet3/diag_ubm/train_si284_sp_hires_subset exp/nnet3/pca_transform
Done estimating PCA transform in exp/nnet3/pca_transform
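A quarter of the data (18912 / 4 = 4728 utterances) is enough for UBM training, and the PCA transform is estimated on spliced high-resolution MFCCs (--left-context=3 --right-context=3, i.e. 7 frames concatenated). A numpy sketch of splicing plus PCA estimation, assuming toy 40-dim features (the real get_pca_transform.sh also subsamples frames and writes a Kaldi-format matrix):

```python
import numpy as np

def splice(feats, left=3, right=3):
    """Concatenate each frame with `left`/`right` neighbours (edges clamped)."""
    n = len(feats)
    idx = np.clip(np.arange(n)[:, None] + np.arange(-left, right + 1), 0, n - 1)
    return feats[idx].reshape(n, -1)

def pca_transform(feats, dim):
    """Estimate a dim x input_dim PCA projection from mean-centred features."""
    mean = feats.mean(axis=0)
    centred = feats - mean
    cov = centred.T @ centred / len(feats)
    vals, vecs = np.linalg.eigh(cov)        # eigenvalues in ascending order
    return vecs[:, ::-1][:, :dim].T, mean   # rows = top-`dim` principal axes

rng = np.random.default_rng(3)
mfcc = rng.normal(size=(200, 40))           # toy 40-dim hires MFCCs
spliced = splice(mfcc)                      # 200 x 280 (7 frames x 40 dims)
proj, mean = pca_transform(spliced, dim=40)
reduced = (spliced - mean) @ proj.T         # 200 x 40 projected features
```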
local/nnet3/run_ivector_common.sh: training the diagonal UBM.
steps/online/nnet2/train_diag_ubm.sh --cmd run.pl --nj 30 --num-frames 700000 --num-threads 32 exp/nnet3/diag_ubm/train_si284_sp_hires_subset 512 exp/nnet3/pca_transform exp/nnet3/diag_ubm
steps/online/nnet2/train_diag_ubm.sh: Directory exp/nnet3/diag_ubm already exists. Backing up diagonal UBM in exp/nnet3/diag_ubm/backup.i1B
steps/online/nnet2/train_diag_ubm.sh: initializing model from E-M in memory,
steps/online/nnet2/train_diag_ubm.sh: starting from 256 Gaussians, reaching 512;
steps/online/nnet2/train_diag_ubm.sh: for 20 iterations, using at most 700000 frames of data
Getting Gaussian-selection info
steps/online/nnet2/train_diag_ubm.sh: will train for 4 iterations, in parallel over
steps/online/nnet2/train_diag_ubm.sh: 30 machines, parallelized with 'run.pl'
steps/online/nnet2/train_diag_ubm.sh: Training pass 0
steps/online/nnet2/train_diag_ubm.sh: Training pass 1
steps/online/nnet2/train_diag_ubm.sh: Training pass 2
steps/online/nnet2/train_diag_ubm.sh: Training pass 3
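train_diag_ubm.sh first initializes a diagonal-covariance GMM by E-M in memory (growing from 256 to 512 Gaussians over 20 iterations on at most 700000 frames), then runs 4 parallelized accumulate/update passes. A self-contained numpy sketch of one diagonal-GMM E-M loop, without the Gaussian growing, pruning, and Gaussian-selection machinery the real tools add:

```python
import numpy as np

def diag_gmm_em(X, num_gauss, num_iters=20, var_floor=1e-3):
    """Toy E-M for a diagonal-covariance GMM (illustrative only)."""
    n, dim = X.shape
    # Spread initial means across the data along dim 0 (a simplification;
    # Kaldi instead grows Gaussians by splitting them).
    order = np.argsort(X[:, 0])
    means = X[order[np.linspace(0, n - 1, num_gauss).astype(int)]].copy()
    variances = np.tile(X.var(axis=0), (num_gauss, 1))
    weights = np.full(num_gauss, 1.0 / num_gauss)
    for _ in range(num_iters):
        # E-step: per-frame log-likelihood under each Gaussian, plus log weight
        ll = (np.log(weights)
              - 0.5 * np.log(2 * np.pi * variances).sum(axis=1)
              - 0.5 * ((X[:, None, :] - means) ** 2 / variances).sum(axis=2))
        ll -= ll.max(axis=1, keepdims=True)        # numerical stability
        post = np.exp(ll)
        post /= post.sum(axis=1, keepdims=True)    # responsibilities
        # M-step: zeroth/first/second-order stats -> weights/means/variances
        occ = post.sum(axis=0)
        means = (post.T @ X) / occ[:, None]
        variances = (post.T @ X ** 2) / occ[:, None] - means ** 2
        variances = np.maximum(variances, var_floor)
        weights = occ / n
    return weights, means, variances

# Two well-separated 2-D clusters; the fitted means should land near them.
rng = np.random.default_rng(2)
X = np.vstack([rng.normal(-5, 1, (400, 2)), rng.normal(5, 1, (400, 2))])
w, mu, var = diag_gmm_em(X, num_gauss=2)
```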
local/nnet3/run_ivector_common.sh: training the iVector extractor
steps/online/nnet2/train_ivector_extractor.sh --cmd run.pl --online-cmvn-iextractor false --nj 10 --num-threads 4 --num-processes 4 data/train_si284_sp_hires exp/nnet3/diag_ubm exp/nnet3/extractor
steps/online/nnet2/train_ivector_extractor.sh: Directory exp/nnet3/extractor already exists. Backing up iVector extractor in exp/nnet3/extractor/backup.Sd1
steps/online/nnet2/train_ivector_extractor.sh: doing Gaussian selection and posterior computation
Accumulating stats (pass 0)
Summing accs (pass 0)
Updating model (pass 0)
Accumulating stats (pass 1)
Summing accs (pass 1)
Updating model (pass 1)
Accumulating stats (pass 2)
Summing accs (pass 2)
Updating model (pass 2)
Accumulating stats (pass 3)
Summing accs (pass 3)
Updating model (pass 3)
Accumulating stats (pass 4)
Summing accs (pass 4)
Updating model (pass 4)
Accumulating stats (pass 5)
Summing accs (pass 5)
Updating model (pass 5)
Accumulating stats (pass 6)
Summing accs (pass 6)
Updating model (pass 6)
Accumulating stats (pass 7)
Summing accs (pass 7)
Updating model (pass 7)
Accumulating stats (pass 8)
Summing accs (pass 8)
Updating model (pass 8)
Accumulating stats (pass 9)
Summing accs (pass 9)
Updating model (pass 9)
utils/data/modify_speaker_info.sh: copied data from data/train_si284_sp_hires to exp/nnet3/ivectors_train_si284_sp_hires/train_si284_sp_hires_max2, number of speakers changed from 195 to 9495
utils/validate_data_dir.sh: Successfully validated data-directory exp/nnet3/ivectors_train_si284_sp_hires/train_si284_sp_hires_max2
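modify_speaker_info.sh with a cap of 2 utterances per speaker splits the 195 real speakers into 9495 pseudo-speakers, so that training-time iVectors vary within a real speaker instead of being averaged over all of their data. A sketch of just the grouping logic (the pseudo-speaker id format below is hypothetical; the real script also renames utterances consistently across all data-dir files):

```python
def split_speakers(spk2utt, max_utts=2):
    """Split each speaker's utterance list into chunks of <= max_utts,
    returning a new spk2utt keyed by pseudo-speaker ids."""
    new_spk2utt = {}
    for spk, utts in spk2utt.items():
        for i in range(0, len(utts), max_utts):
            pseudo = f"{spk}-{i // max_utts + 1:04d}"  # hypothetical id format
            new_spk2utt[pseudo] = utts[i:i + max_utts]
    return new_spk2utt

# Toy spk2utt: 5 utts -> 3 pseudo-speakers, 4 utts -> 2 pseudo-speakers.
spk2utt = {"011": [f"011_{k}" for k in range(5)],
           "012": [f"012_{k}" for k in range(4)]}
new = split_speakers(spk2utt)
```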
steps/online/nnet2/extract_ivectors_online.sh --cmd run.pl --nj 30 exp/nnet3/ivectors_train_si284_sp_hires/train_si284_sp_hires_max2 exp/nnet3/extractor exp/nnet3/ivectors_train_si284_sp_hires
steps/online/nnet2/extract_ivectors_online.sh: extracting iVectors
steps/online/nnet2/extract_ivectors_online.sh: combining iVectors across jobs
steps/online/nnet2/extract_ivectors_online.sh: done extracting (online) iVectors to exp/nnet3/ivectors_train_si284_sp_hires using the extractor in exp/nnet3/extractor.
steps/online/nnet2/extract_ivectors_online.sh --cmd run.pl --nj 65 data/test_dev93_hires exp/nnet3/extractor exp/nnet3/ivectors_test_dev93_hires
steps/online/nnet2/extract_ivectors_online.sh: extracting iVectors
steps/online/nnet2/extract_ivectors_online.sh: combining iVectors across jobs
steps/online/nnet2/extract_ivectors_online.sh: done extracting (online) iVectors to exp/nnet3/ivectors_test_dev93_hires using the extractor in exp/nnet3/extractor.
steps/online/nnet2/extract_ivectors_online.sh --cmd run.pl --nj 65 data/test_eval92_hires exp/nnet3/extractor exp/nnet3/ivectors_test_eval92_hires
steps/online/nnet2/extract_ivectors_online.sh: extracting iVectors
steps/online/nnet2/extract_ivectors_online.sh: combining iVectors across jobs
steps/online/nnet2/extract_ivectors_online.sh: done extracting (online) iVectors to exp/nnet3/ivectors_test_eval92_hires using the extractor in exp/nnet3/extractor.
local/nnet3/run_tdnn.sh: creating neural net configs using the xconfig parser
tree-info exp/tri4b/tree
steps/nnet3/xconfig_to_configs.py --xconfig-file exp/nnet3/tdnn1a_sp/configs/network.xconfig --config-dir exp/nnet3/tdnn1a_sp/configs/
nnet3-init exp/nnet3/tdnn1a_sp/configs//init.config exp/nnet3/tdnn1a_sp/configs//init.raw
LOG (nnet3-init[5.5.839~8-0c6a]:main():nnet3-init.cc:80) Initialized raw neural net and wrote it to exp/nnet3/tdnn1a_sp/configs//init.raw
nnet3-info exp/nnet3/tdnn1a_sp/configs//init.raw
nnet3-init exp/nnet3/tdnn1a_sp/configs//ref.config exp/nnet3/tdnn1a_sp/configs//ref.raw
LOG (nnet3-init[5.5.839~8-0c6a]:main():nnet3-init.cc:80) Initialized raw neural net and wrote it to exp/nnet3/tdnn1a_sp/configs//ref.raw
nnet3-info exp/nnet3/tdnn1a_sp/configs//ref.raw
nnet3-init exp/nnet3/tdnn1a_sp/configs//ref.config exp/nnet3/tdnn1a_sp/configs//ref.raw
LOG (nnet3-init[5.5.839~8-0c6a]:main():nnet3-init.cc:80) Initialized raw neural net and wrote it to exp/nnet3/tdnn1a_sp/configs//ref.raw
nnet3-info exp/nnet3/tdnn1a_sp/configs//ref.raw
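The actual network.xconfig is not printed in this log; for reference, a typical nnet3 TDNN xconfig in the WSJ recipe looks roughly like the fragment below. All layer names, dimensions, and paths here are illustrative assumptions, not the values of this run, and the fragment is not tuned to reproduce this model's (left,right) context of (13,7):

```
# Hypothetical xconfig fragment; the real file is
# exp/nnet3/tdnn1a_sp/configs/network.xconfig.
input dim=100 name=ivector
input dim=40 name=input

# Splice frames, append the iVector, apply a precomputed LDA-like transform.
fixed-affine-layer name=lda input=Append(-2,-1,0,1,2,ReplaceIndex(ivector, t, 0)) affine-transform-file=exp/nnet3/tdnn1a_sp/configs/lda.mat

relu-renorm-layer name=tdnn1 dim=650
relu-renorm-layer name=tdnn2 dim=650 input=Append(-1,0,1)
relu-renorm-layer name=tdnn3 dim=650 input=Append(-3,0,3)
output-layer name=output dim=3413 max-change=1.5
```

steps/nnet3/xconfig_to_configs.py expands such a file into the init.config / ref.config / final.config that nnet3-init consumes, which is what the nnet3-init and nnet3-info calls above are doing.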
2021-06-23 21:35:31,719 [steps/nnet3/train_dnn.py:36 - <module> - INFO ] Starting DNN trainer (train_dnn.py)
steps/nnet3/train_dnn.py --stage=-10 --cmd=run.pl --mem 32G --feat.online-ivector-dir=exp/nnet3/ivectors_train_si284_sp_hires --feat.cmvn-opts=--norm-means=false --norm-vars=false --trainer.srand=0 --trainer.max-param-change=2.0 --trainer.num-epochs=3 --trainer.samples-per-iter=400000 --trainer.optimization.num-jobs-initial=2 --trainer.optimization.num-jobs-final=2 --trainer.optimization.initial-effective-lrate=0.0015 --trainer.optimization.final-effective-lrate=0.00015 --trainer.optimization.minibatch-size=256,128 --egs.dir= --cleanup.remove-egs=true --use-gpu=true --feat-dir=data/train_si284_sp_hires --ali-dir=exp/tri4b_ali_train_si284_sp --lang=data/lang --reporting.email= --dir=exp/nnet3/tdnn1a_sp
['steps/nnet3/train_dnn.py', '--stage=-10', '--cmd=run.pl --mem 32G', '--feat.online-ivector-dir=exp/nnet3/ivectors_train_si284_sp_hires', '--feat.cmvn-opts=--norm-means=false --norm-vars=false', '--trainer.srand=0', '--trainer.max-param-change=2.0', '--trainer.num-epochs=3', '--trainer.samples-per-iter=400000', '--trainer.optimization.num-jobs-initial=2', '--trainer.optimization.num-jobs-final=2', '--trainer.optimization.initial-effective-lrate=0.0015', '--trainer.optimization.final-effective-lrate=0.00015', '--trainer.optimization.minibatch-size=256,128', '--egs.dir=', '--cleanup.remove-egs=true', '--use-gpu=true', '--feat-dir=data/train_si284_sp_hires', '--ali-dir=exp/tri4b_ali_train_si284_sp', '--lang=data/lang', '--reporting.email=', '--dir=exp/nnet3/tdnn1a_sp']
2021-06-23 21:35:31,726 [steps/nnet3/train_dnn.py:178 - train - INFO ] Arguments for the experiment
{'ali_dir': 'exp/tri4b_ali_train_si284_sp',
'backstitch_training_interval': 1,
'backstitch_training_scale': 0.0,
'cleanup': True,
'cmvn_opts': '--norm-means=false --norm-vars=false',
'combine_sum_to_one_penalty': 0.0,
'command': 'run.pl --mem 32G',
'compute_per_dim_accuracy': False,
'dir': 'exp/nnet3/tdnn1a_sp',
'do_final_combination': True,
'dropout_schedule': None,
'egs_command': None,
'egs_dir': None,
'egs_opts': None,
'egs_stage': 0,
'email': None,
'exit_stage': None,
'feat_dir': 'data/train_si284_sp_hires',
'final_effective_lrate': 0.00015,
'frames_per_eg': 8,
'initial_effective_lrate': 0.0015,
'input_model': None,
'lang': 'data/lang',
'max_lda_jobs': 10,
'max_models_combine': 20,
'max_objective_evaluations': 30,
'max_param_change': 2.0,
'minibatch_size': '256,128',
'momentum': 0.0,
'num_epochs': 3.0,
'num_jobs_compute_prior': 10,
'num_jobs_final': 2,
'num_jobs_initial': 2,
'num_jobs_step': 1,
'online_ivector_dir': 'exp/nnet3/ivectors_train_si284_sp_hires',
'preserve_model_interval': 100,
'presoftmax_prior_scale_power': -0.25,
'prior_subset_size': 20000,
'proportional_shrink': 0.0,
'rand_prune': 4.0,
'remove_egs': True,
'reporting_interval': 0.1,
'samples_per_iter': 400000,
'shuffle_buffer_size': 5000,
'srand': 0,
'stage': -10,
'train_opts': [],
'use_gpu': 'yes'}
2021-06-23 21:35:32,312 [steps/nnet3/train_dnn.py:228 - train - INFO ] Initializing a basic network for estimating preconditioning matrix
2021-06-23 21:35:32,415 [steps/nnet3/train_dnn.py:238 - train - INFO ] Generating egs
steps/nnet3/get_egs.sh --cmd run.pl --mem 32G --cmvn-opts --norm-means=false --norm-vars=false --online-ivector-dir exp/nnet3/ivectors_train_si284_sp_hires --left-context 13 --right-context 7 --left-context-initial -1 --right-context-final -1 --stage 0 --samples-per-iter 400000 --frames-per-eg 8 --srand 0 data/train_si284_sp_hires exp/tri4b_ali_train_si284_sp exp/nnet3/tdnn1a_sp/egs
File data/train_si284_sp_hires/utt2uniq exists, so augmenting valid_uttlist to
include all perturbed versions of the same 'real' utterances.
steps/nnet3/get_egs.sh: creating egs.  To ensure they are not deleted later you can do:  touch exp/nnet3/tdnn1a_sp/egs/.nodelete
steps/nnet3/get_egs.sh: feature type is raw, with 'apply-cmvn'
feat-to-dim scp:exp/nnet3/ivectors_train_si284_sp_hires/ivector_online.scp -
steps/nnet3/get_egs.sh: working out number of frames of training data
steps/nnet3/get_egs.sh: working out feature dim
steps/nnet3/get_egs.sh: creating 3 archives, each with 300234 egs, with
steps/nnet3/get_egs.sh:   8 labels per example, and (left,right) context = (13,7)
steps/nnet3/get_egs.sh: copying data alignments
copy-int-vector ark:- ark,scp:exp/nnet3/tdnn1a_sp/egs/ali.ark,exp/nnet3/tdnn1a_sp/egs/ali.scp
LOG (copy-int-vector[5.5.839~8-0c6a]:main():copy-int-vector.cc:83) Copied 18906 vectors of int32.
steps/nnet3/get_egs.sh: Getting validation and training subset examples.
steps/nnet3/get_egs.sh: ... extracting validation and training-subset alignments.
... Getting subsets of validation examples for diagnostics and combination.
steps/nnet3/get_egs.sh: Generating training examples on disk
steps/nnet3/get_egs.sh: recombining and shuffling order of archives on disk
steps/nnet3/get_egs.sh: removing temporary archives
steps/nnet3/get_egs.sh: removing temporary alignments
steps/nnet3/get_egs.sh: Finished preparing training examples
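The archive layout reported above (3 archives of 300234 egs each, 8 labels per eg) follows from the total frame count and --samples-per-iter=400000. The arithmetic below mirrors what steps/nnet3/get_egs.sh reports; the exact formula is an assumption inferred from its output, and the frame count is back-computed from the log rather than taken from it:

```python
# Back-computed from the log: 3 archives x 300234 egs x 8 frames per eg.
num_frames = 3 * 300234 * 8        # total supervised frames of training data
frames_per_eg = 8
samples_per_iter = 400000

# Assumed integer arithmetic, mirroring the get_egs.sh log lines above:
num_archives = num_frames // (frames_per_eg * samples_per_iter) + 1
egs_per_archive = num_frames // frames_per_eg // num_archives
```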
2021-06-23 21:36:12,873 [steps/nnet3/train_dnn.py:276 - train - INFO ] Computing the preconditioning matrix for input features
2021-06-23 21:36:23,354 [steps/nnet3/train_dnn.py:287 - train - INFO ] Computing initial vector for FixedScaleComponent before softmax, using priors^-0.25 and rescaling to average 1
2021-06-23 21:36:23,660 [steps/nnet3/train_dnn.py:294 - train - INFO ] Preparing the initial acoustic model.
2021-06-23 21:36:24,208 [steps/nnet3/train_dnn.py:319 - train - INFO ] Training will run for 3.0 epochs = 36 iterations
2021-06-23 21:36:24,209 [steps/nnet3/train_dnn.py:355 - train - INFO ] Iter: 0/35   Jobs: 2   Epoch: 0.00/3.0 (0.0% complete)   lr: 0.003000  
2021-06-23 21:37:41,807 [steps/nnet3/train_dnn.py:355 - train - INFO ] Iter: 1/35   Jobs: 2   Epoch: 0.08/3.0 (2.8% complete)   lr: 0.002814  
2021-06-23 21:38:29,577 [steps/nnet3/train_dnn.py:355 - train - INFO ] Iter: 2/35   Jobs: 2   Epoch: 0.17/3.0 (5.6% complete)   lr: 0.002640  
2021-06-23 21:39:17,188 [steps/nnet3/train_dnn.py:355 - train - INFO ] Iter: 3/35   Jobs: 2   Epoch: 0.25/3.0 (8.3% complete)   lr: 0.002476  
2021-06-23 21:40:04,595 [steps/nnet3/train_dnn.py:355 - train - INFO ] Iter: 4/35   Jobs: 2   Epoch: 0.33/3.0 (11.1% complete)   lr: 0.002323  
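The printed "lr:" values are the per-iteration effective learning rate times the number of jobs (2), decayed exponentially from --initial-effective-lrate=0.0015 to --final-effective-lrate=0.00015 over the whole run. Each archive is reused with 8 frame shifts (hence the --frame=N option in the training command), so one epoch is 3 × 8 = 24 archive-processings and 3 epochs at 2 jobs per iteration gives the "36 iterations" above. The schedule below reproduces the logged values:

```python
import math

initial_lrate, final_lrate = 0.0015, 0.00015
num_jobs = 2
num_archives = 3      # from the get_egs.sh output above
frames_per_eg = 8     # each archive is reused with 8 frame shifts
num_epochs = 3
archives_to_process = num_epochs * num_archives * frames_per_eg   # 72
num_iters = archives_to_process // num_jobs                       # 36

def lrate(it):
    """Learning rate printed at iteration `it` (effective rate x job count)."""
    frac = (num_jobs * it) / archives_to_process
    return num_jobs * initial_lrate * math.exp(
        frac * math.log(final_lrate / initial_lrate))

# lrate(0) -> 0.003000, lrate(4) -> 0.00232279..., matching the log lines.
```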
run.pl: job failed, log is in exp/nnet3/tdnn1a_sp/log/train.4.1.log
2021-06-23 21:40:06,225 [steps/libs/common.py:236 - background_command_waiter - ERROR ] Command exited with status 1: run.pl --mem 32G --gpu 1 exp/nnet3/tdnn1a_sp/log/train.4.1.log                     nnet3-train --use-gpu=yes --read-cache=exp/nnet3/tdnn1a_sp/cache.4 --write-cache=exp/nnet3/tdnn1a_sp/cache.5                       --print-interval=10                     --momentum=0.0                     --max-param-change=2.0                     --backstitch-training-scale=0.0                     --l2-regularize-factor=0.5                     --backstitch-training-interval=1                     --srand=4                       "nnet3-copy --learning-rate=0.00232279104804 --scale=1.0 exp/nnet3/tdnn1a_sp/4.mdl - |" "ark,bg:nnet3-copy-egs --frame=5              ark:exp/nnet3/tdnn1a_sp/egs/egs.3.ark ark:- |             nnet3-shuffle-egs --buffer-size=5000             --srand=4 ark:- ark:- |              nnet3-merge-egs --minibatch-size=256,128 ark:- ark:- |"                     exp/nnet3/tdnn1a_sp/5.1.raw
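This is where the run dies: iteration 4's GPU training job failed, and run.pl points at exp/nnet3/tdnn1a_sp/log/train.4.1.log, which is where the actual error (commonly GPU memory exhaustion or a CUDA setup problem) will be recorded; the learning rate 0.00232279104804 in the failing command matches the schedule above. When scripting around run.pl, the failing log path can be pulled out of its message mechanically; a small illustrative helper (the message format is taken from this log line):

```python
import re

def failed_log_path(runpl_message):
    """Extract the log file path from a run.pl failure message, or None."""
    m = re.search(r"job failed, log is in (\S+)", runpl_message)
    return m.group(1) if m else None

msg = "run.pl: job failed, log is in exp/nnet3/tdnn1a_sp/log/train.4.1.log"
# failed_log_path(msg) -> 'exp/nnet3/tdnn1a_sp/log/train.4.1.log'
```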

Copyright (c) 2004-2021 锐英源软件. All rights reserved.