# Dataset Card for BibleNLP Corpus

## Dataset Summary
Partial and complete Bible translations in 615 languages, aligned by verse.
## Languages
aau, aaz, abx, aby, acf, acu, adz, aey, agd, agg, agm, agn, agr, agu, aia, ake, alp, alq, als, aly, ame, amk, amp, amr, amu, anh, anv, aoi, aoj, apb, apn, apu, apy, arb, arl, arn, arp, aso, ata, atb, atd, atg, auc, aui, auy, avt, awb, awk, awx, azg, azz, bao, bbb, bbr, bch, bco, bdd, bea, bel, bgs, bgt, bhg, bhl, big, bjr, bjv, bkd, bki, bkq, bkx, bla, blw, blz, bmh, bmk, bmr, bnp, boa, boj, bon, box, bqc, bre, bsn, bsp, bss, buk, bus, bvr, bxh, byx, bzd, bzj, cab, caf, cao, cap, car, cav, cax, cbc, cbi, cbk, cbr, cbs, cbt, cbu, cbv, cco, ces, cgc, cha, chd, chf, chk, chq, chz, cjo, cjv, cle, clu, cme, cmn, cni, cnl, cnt, cof, con, cop, cot, cpa, cpb, cpc, cpu, crn, crx, cso, cta, ctp, ctu, cub, cuc, cui, cut, cux, cwe, daa, dad, dah, ded, deu, dgr, dgz, dif, dik, dji, djk, dob, dwr, dww, dwy, eko, emi, emp, eng, epo, eri, ese, etr, faa, fai, far, for, fra, fuf, gai, gam, gaw, gdn, gdr, geb, gfk, ghs, gia, glk, gmv, gng, gnn, gnw, gof, grc, gub, guh, gui, gul, gum, guo, gvc, gvf, gwi, gym, gyr, hat, haw, hbo, hch, heb, heg, hix, hla, hlt, hns, hop, hrv, hub, hui, hus, huu, huv, hvn, ign, ikk, ikw, imo, inb, ind, ino, iou, ipi, ita, jac, jao, jic, jiv, jpn, jvn, kaq, kbc, kbh, kbm, kdc, kde, kdl, kek, ken, kew, kgk, kgp, khs, kje, kjs, kkc, kky, klt, klv, kms, kmu, kne, knf, knj, kos, kpf, kpg, kpj, kpw, kqa, kqc, kqf, kql, kqw, ksj, ksr, ktm, kto, kud, kue, kup, kvn, kwd, kwf, kwi, kwj, kyf, kyg, kyq, kyz, kze, lac, lat, lbb, leu, lex, lgl, lid, lif, lww, maa, maj, maq, mau, mav, maz, mbb, mbc, mbh, mbl, mbt, mca, mcb, mcd, mcf, mcp, mdy, med, mee, mek, meq, met, meu, mgh, mgw, mhl, mib, mic, mie, mig, mih, mil, mio, mir, mit, miz, mjc, mkn, mks, mlh, mlp, mmx, mna, mop, mox, mph, mpj, mpm, mpp, mps, mpx, mqb, mqj, msb, msc, msk, msm, msy, mti, muy, mva, mvn, mwc, mxb, mxp, mxq, mxt, myu, myw, myy, mzz, nab, naf, nak, nay, nbq, nca, nch, ncj, ncl, ncu, ndj, nfa, ngp, ngu, nhg, nhi, nho, nhr, nhu, nhw, nhy, nif, nin, nko, nld, nlg, nna, nnq, not, nou, npl, nsn, nss, ntj, ntp, nwi, nyu, obo, ong, ons, ood, opm, ote, otm, otn, otq, ots, pab, pad, pah, pao, pes, pib, pio, pir, pjt, plu, pma, poe, poi, pon, poy, ppo, prf, pri, ptp, ptu, pwg, quc, quf, quh, qul, qup, qvc, qve, qvh, qvm, qvn, qvs, qvw, qvz, qwh, qxh, qxn, qxo, rai, rkb, rmc, roo, rop, rro, ruf, rug, rus, sab, san, sbe, seh, sey, sgz, shj, shp, sim, sja, sll, smk, snc, snn, sny, som, soq, spa, spl, spm, sps, spy, sri, srm, srn, srp, srq, ssd, ssg, ssx, stp, sua, sue, sus, suz, swe, swh, swp, sxb, tac, tav, tbc, tbl, tbo, tbz, tca, tee, ter, tew, tfr, tgp, tif, tim, tiy, tke, tku, tna, tnc, tnn, tnp, toc, tod, toj, ton, too, top, tos, tpt, trc, tsw, ttc, tue, tuo, txu, ubr, udu, ukr, uli, ura, urb, usa, usp, uvl, vid, vie, viv, vmy, waj, wal, wap, wat, wbp, wed, wer, wim, wmt, wmw, wnc, wnu, wos, wrk, wro, wsk, wuv, xav, xed, xla, xnn, xon, xsi, xtd, xtm, yaa, yad, yal, yap, yaq, yby, ycn, yka, yml, yre, yuj, yut, yuw, yva, zaa, zab, zac, zad, zai, zaj, zam, zao, zar, zas, zat, zav, zaw, zca, zia, ziw, zos, zpc, zpl, zpo, zpq, zpu, zpv, zpz, zsr, ztq, zty, zyp
## Dataset Structure

### Data Fields
translation
- languages - an N length list of the languages of the translations, sorted alphabetically
- translation - an N length list of the translations, each corresponding to the language specified in the above field

files
- lang - an N length list of the languages of the files, in order of input
- file - an N length list of the filenames from the corpus on GitHub, each corresponding with the lang above

ref - the verse(s) contained in the record, as a list, with each represented as:
<a three-letter book code> <chapter number>:<verse number>

licenses - an N length list of licenses, corresponding to the list of files above

copyrights - information on copyright holders, corresponding to the list of files above
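
For illustration, a single record for an eng/fra pairing might look roughly like the sketch below. Every value (verse text, filenames, licenses, copyrights) is a placeholder used only to show the field layout, not actual corpus content.

```python
# Hypothetical record shape for languages=['eng', 'fra']; all values are placeholders.
example_record = {
    "translation": {
        "languages": ["eng", "fra"],                   # sorted alphabetically
        "translation": ["<English verse text>", "<French verse text>"],
    },
    "files": {
        "lang": ["eng", "fra"],                        # in order of input
        "file": ["<eng filename>", "<fra filename>"],  # filenames from the GitHub corpus
    },
    "ref": ["GEN 1:1"],                                # <book code> <chapter>:<verse>
    "licenses": ["<license for eng file>", "<license for fra file>"],
    "copyrights": ["<copyright info>", "<copyright info>"],
}
```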
## Usage
The dataset loading script requires installation of tqdm, ijson, and numpy.

Specify the languages to be paired as a list of ISO 639-3 language codes, for example languages = ['eng', 'fra'].

By default, the script will return individual verse pairs as well as verses covering a full range. If only the individual verses are desired, use pair='single'. If only the maximum range pairing is desired, use pair='range' (for example, if one text uses the verse range covering GEN 1:1-3, all texts would return only the full-length pairing).
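
For example, loading English and French verse pairs might look like the following. This is a minimal sketch, assuming the standard datasets.load_dataset interface with the languages and pair arguments described above; the exact call and the returned split names may differ.

```python
# Minimal sketch; assumes the Hugging Face `datasets` library and the
# `languages`/`pair` arguments described above.
# Loading-script dependencies: pip install datasets tqdm ijson numpy
from datasets import load_dataset

corpus = load_dataset(
    "bible-nlp/biblenlp-corpus",
    languages=["eng", "fra"],  # ISO 639-3 codes, as a list
    pair="single",             # or "range" for only the maximum-range pairings
    # trust_remote_code=True,  # may be required by recent `datasets` versions
)

print(corpus)  # inspect the available splits and features
```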
## Sources