"""Tokenization classes for Moss"""
import json
import os
import numpy as np
import regex as re

from functools import lru_cache
from typing import TYPE_CHECKING, List, Optional, Tuple, Union

from transformers.utils import is_tf_available, is_torch_available, logging
from transformers.tokenization_utils import AddedToken, PreTrainedTokenizer


if TYPE_CHECKING:
    if is_torch_available():
        import torch
    if is_tf_available():
        import tensorflow as tf


logger = logging.get_logger(__name__)

VOCAB_FILES_NAMES = {
    "vocab_file": "vocab.json",
    "merges_file": "merges.txt",
}

PRETRAINED_VOCAB_FILES_MAP = {
    "vocab_file": {
        "fnlp/moss-moon-003-base": "https://huggingface.co/fnlp/moss-moon-003-base/resolve/main/vocab.json",
        "fnlp/moss-moon-003-sft": "https://huggingface.co/fnlp/moss-moon-003-sft/resolve/main/vocab.json",
        "fnlp/moss-moon-003-sft-plugin": "https://huggingface.co/fnlp/moss-moon-003-sft-plugin/resolve/main/vocab.json",
    },
    "merges_file": {
        "fnlp/moss-moon-003-base": "https://huggingface.co/fnlp/moss-moon-003-base/resolve/main/merges.txt",
        "fnlp/moss-moon-003-sft": "https://huggingface.co/fnlp/moss-moon-003-sft/resolve/main/merges.txt",
        "fnlp/moss-moon-003-sft-plugin": "https://huggingface.co/fnlp/moss-moon-003-sft-plugin/resolve/main/merges.txt",
    },
}

# NOTE: the exact values were not readable in the damaged source; 2048 (the MOSS
# context length) is assumed here.
PRETRAINED_POSITIONAL_EMBEDDINGS_SIZES = {
    "fnlp/moss-moon-003-base": 2048,
    "fnlp/moss-moon-003-sft": 2048,
    "fnlp/moss-moon-003-sft-plugin": 2048,
}


@lru_cache()
def bytes_to_unicode():
    """
    Returns list of utf-8 byte and a mapping to unicode strings. We specifically avoid mapping to whitespace/control
    characters the bpe code barfs on.

    The reversible bpe codes work on unicode strings. This means you need a large # of unicode characters in your
    vocab if you want to avoid UNKs. When you're at something like a 10B token dataset you end up needing around 5K
    for decent coverage. This is a significant percentage of your normal, say, 32K bpe vocab. To avoid that, we want
    lookup tables between utf-8 bytes and unicode strings.
    """
    bs = (
        list(range(ord("!"), ord("~") + 1))
        + list(range(ord("¡"), ord("¬") + 1))
        + list(range(ord("®"), ord("ÿ") + 1))
    )
    cs = bs[:]
    n = 0
    for b in range(2**8):
        if b not in bs:
            bs.append(b)
            cs.append(2**8 + n)
            n += 1
    cs = [chr(n) for n in cs]
    return dict(zip(bs, cs))


def get_pairs(word):
    """
    Return set of symbol pairs in a word.

    Word is represented as tuple of symbols (symbols being variable-length strings).
    """
    pairs = set()
    prev_char = word[0]
    for char in word[1:]:
        pairs.add((prev_char, char))
        prev_char = char
    return pairs


class MossTokenizer(PreTrainedTokenizer):
    """
    Construct a Moss tokenizer. Based on byte-level Byte-Pair-Encoding.

    This tokenizer has been trained to treat spaces like parts of the tokens (a bit like sentencepiece) so a word will
    be encoded differently whether it is at the beginning of the sentence (without space) or not:

    You can get around that behavior by passing `add_prefix_space=True` when instantiating this tokenizer or when you
    call it on some text, but since the model was not pretrained this way, it might yield a decrease in performance.

    When used with `is_split_into_words=True`, this tokenizer will add a space before each word (even the first one).

    This tokenizer inherits from [`PreTrainedTokenizer`] which contains most of the main methods. Users should refer
    to this superclass for more information regarding those methods.

    Args:
        vocab_file (`str`):
            Path to the vocabulary file.
        merges_file (`str`):
            Path to the merges file.
        errors (`str`, *optional*, defaults to `"replace"`):
            Paradigm to follow when decoding bytes to UTF-8. See
            [bytes.decode](https://docs.python.org/3/library/stdtypes.html#bytes.decode) for more information.
        unk_token (`str`, *optional*, defaults to `<|endoftext|>`):
            The unknown token. A token that is not in the vocabulary cannot be converted to an ID and is set to be
            this token instead.
        bos_token (`str`, *optional*, defaults to `<|endoftext|>`):
            The beginning of sequence token.
        eos_token (`str`, *optional*, defaults to `<|endoftext|>`):
            The end of sequence token.
        add_prefix_space (`bool`, *optional*, defaults to `False`):
            Whether or not to add an initial space to the input. This allows to treat the leading word just as any
            other word. (Moss tokenizer detects beginning of words by the preceding space).
    """

    vocab_files_names = VOCAB_FILES_NAMES
    pretrained_vocab_files_map = PRETRAINED_VOCAB_FILES_MAP
    max_model_input_sizes = PRETRAINED_POSITIONAL_EMBEDDINGS_SIZES
    model_input_names = ["input_ids", "attention_mask"]

    def __init__(
        self,
        vocab_file,
        merges_file,
        errors="replace",
        unk_token="<|endoftext|>",
        bos_token="<|endoftext|>",
        eos_token="<|endoftext|>",
        pad_token=None,
        add_prefix_space=False,
        add_bos_token=False,
        **kwargs,
    ):
        bos_token = AddedToken(bos_token, lstrip=False, rstrip=False) if isinstance(bos_token, str) else bos_token
        eos_token = AddedToken(eos_token, lstrip=False, rstrip=False) if isinstance(eos_token, str) else eos_token
        unk_token = AddedToken(unk_token, lstrip=False, rstrip=False) if isinstance(unk_token, str) else unk_token
        pad_token = AddedToken(pad_token, lstrip=False, rstrip=False) if isinstance(pad_token, str) else pad_token
        super().__init__(
            errors=errors,
            unk_token=unk_token,
            bos_token=bos_token,
            eos_token=eos_token,
            pad_token=pad_token,
            add_prefix_space=add_prefix_space,
            add_bos_token=add_bos_token,
            **kwargs,
        )
        self.add_bos_token = add_bos_token

        with open(vocab_file, encoding="utf-8") as vocab_handle:
            self.encoder = json.load(vocab_handle)
        self.decoder = {v: k for k, v in self.encoder.items()}
        self.errors = errors  # how to handle errors in decoding
        self.byte_encoder = bytes_to_unicode()
        self.byte_decoder = {v: k for k, v in self.byte_encoder.items()}
        with open(merges_file, encoding="utf-8") as merges_handle:
            bpe_merges = merges_handle.read().split("\n")[1:-1]
        bpe_merges = [tuple(merge.split()) for merge in bpe_merges]
        self.bpe_ranks = dict(zip(bpe_merges, range(len(bpe_merges))))
        self.cache = {}
        self.add_prefix_space = add_prefix_space

        # Should have added re.IGNORECASE so BPE merges can happen for capitalized versions of contractions
        self.pat = re.compile(r"""'s|'t|'re|'ve|'m|'ll|'d| ?\p{L}+| ?\p{N}+| ?[^\s\p{L}\p{N}]+|\s+(?!\S)|\s+""")

    @property
    def vocab_size(self):
        return len(self.encoder)

    def get_vocab(self):
        return dict(self.encoder, **self.added_tokens_encoder)

    def bpe(self, token):
        if token in self.cache:
            return self.cache[token]
        word = tuple(token)
        pairs = get_pairs(word)

        if not pairs:
            return token

        while True:
            bigram = min(pairs, key=lambda pair: self.bpe_ranks.get(pair, float("inf")))
            if bigram not in self.bpe_ranks:
                break
            first, second = bigram
            new_word = []
            i = 0
            while i < len(word):
                try:
                    j = word.index(first, i)
                except ValueError:
                    new_word.extend(word[i:])
                    break
                else:
                    new_word.extend(word[i:j])
                    i = j

                if word[i] == first and i < len(word) - 1 and word[i + 1] == second:
                    new_word.append(first + second)
                    i += 2
                else:
                    new_word.append(word[i])
                    i += 1
            new_word = tuple(new_word)
            word = new_word
            if len(word) == 1:
                break
            else:
                pairs = get_pairs(word)
        word = " ".join(word)
        self.cache[token] = word
        return word

    def build_inputs_with_special_tokens(self, token_ids_0, token_ids_1=None):
        if self.add_bos_token:
            bos_token_ids = [self.bos_token_id]
        else:
            bos_token_ids = []

        output = bos_token_ids + token_ids_0

        if token_ids_1 is None:
            return output

        return output + bos_token_ids + token_ids_1

    def _tokenize(self, text):
        """Tokenize a string."""
        bpe_tokens = []
        for token in re.findall(self.pat, text):
            token = "".join(
                self.byte_encoder[b] for b in token.encode("utf-8")
            )  # Maps all our bytes to unicode strings, avoiding control tokens of the BPE (spaces in our case)
            bpe_tokens.extend(bpe_token for bpe_token in self.bpe(token).split(" "))
        return bpe_tokens

    def _convert_token_to_id(self, token):
        """Converts a token (str) in an id using the vocab."""
        return self.encoder.get(token, self.encoder.get(self.unk_token))

    def _convert_id_to_token(self, index):
        """Converts an index (integer) in a token (str) using the vocab."""
        return self.decoder.get(index)

    def convert_tokens_to_string(self, tokens):
        """Converts a sequence of tokens (string) in a single string."""
        text = "".join(tokens)
        text = bytearray([self.byte_decoder[c] for c in text]).decode("utf-8", errors=self.errors)
        return text

    def save_vocabulary(self, save_directory: str, filename_prefix: Optional[str] = None) -> Tuple[str]:
        if not os.path.isdir(save_directory):
            logger.error(f"Vocabulary path ({save_directory}) should be a directory")
            return
        vocab_file = os.path.join(
            save_directory, (filename_prefix + "-" if filename_prefix else "") + VOCAB_FILES_NAMES["vocab_file"]
        )
        merge_file = os.path.join(
            save_directory, (filename_prefix + "-" if filename_prefix else "") + VOCAB_FILES_NAMES["merges_file"]
        )

        with open(vocab_file, "w", encoding="utf-8") as f:
            f.write(json.dumps(self.encoder, indent=2, sort_keys=True, ensure_ascii=False) + "\n")

        index = 0
        with open(merge_file, "w", encoding="utf-8") as writer:
            writer.write("#version: 0.2\n")
            for bpe_tokens, token_index in sorted(self.bpe_ranks.items(), key=lambda kv: kv[1]):
                if index != token_index:
                    logger.warning(
                        f"Saving vocabulary to {merge_file}: BPE merge indices are not consecutive."
                        " Please check that the tokenizer is not corrupted!"
                    )
                    index = token_index
                writer.write(" ".join(bpe_tokens) + "\n")
                index += 1

        return vocab_file, merge_file

    def prepare_for_tokenization(self, text, is_split_into_words=False, **kwargs):
        add_prefix_space = kwargs.pop("add_prefix_space", self.add_prefix_space)
        if is_split_into_words or add_prefix_space:
            text = " " + text
        return (text, kwargs)

    def decode(
        self,
        token_ids: Union[int, List[int], "np.ndarray", "torch.Tensor", "tf.Tensor"],
        skip_special_tokens: bool = False,
        clean_up_tokenization_spaces: bool = None,
        truncate_before_pattern: Optional[List[str]] = None,
        **kwargs,
    ) -> str:
        """
        Converts a sequence of ids in a string, using the tokenizer and vocabulary with options to remove special
        tokens and clean up tokenization spaces.

        Similar to doing `self.convert_tokens_to_string(self.convert_ids_to_tokens(token_ids))`.

        Args:
            token_ids (`Union[int, List[int], np.ndarray, torch.Tensor, tf.Tensor]`):
                List of tokenized input ids. Can be obtained using the `__call__` method.
            skip_special_tokens (`bool`, *optional*, defaults to `False`):
                Whether or not to remove special tokens in the decoding.
            clean_up_tokenization_spaces (`bool`, *optional*):
                Whether or not to clean up the tokenization spaces. If `None`, will default to
                `self.clean_up_tokenization_spaces` (available in the `tokenizer_config`).
            truncate_before_pattern (`List[str]`, *optional*, defaults to `None`):
                A list of regular expression strings that will be used to truncate the returned string. This can be
                used to remove extra pieces of code (e.g. truncate if observing a comment symbol "#" at the beginning
                of a new line). An example pattern could be `["^#", re.escape("<|endoftext|>"), "^'''", "\n\n\n"]`.
            kwargs (additional keyword arguments, *optional*):
                Will be passed to the underlying model specific decode method.

        Returns:
            `str`: The decoded sentence.
        """
        decoded_text = super()._decode(
            token_ids=token_ids,
            skip_special_tokens=skip_special_tokens,
            clean_up_tokenization_spaces=clean_up_tokenization_spaces,
            **kwargs,
        )

        if truncate_before_pattern is not None and len(truncate_before_pattern) > 0:
            decoded_text = self.truncate(decoded_text, truncate_before_pattern)

        return decoded_text

    def truncate(self, completion, truncate_before_pattern):
        def find_re(string, pattern, start_pos):
            m = pattern.search(string, start_pos)
            return m.start() if m else -1

        terminals = [re.compile(pattern, re.MULTILINE) for pattern in truncate_before_pattern]

        prints = list(re.finditer("^print", completion, re.MULTILINE))

        if len(prints) > 1:
            completion = completion[: prints[1].start()]

        defs = list(re.finditer("^def", completion, re.MULTILINE))

        if len(defs) > 1:
            completion = completion[: defs[1].start()]

        start_pos = 0

        terminals_pos = [
            pos for pos in [find_re(completion, terminal, start_pos) for terminal in terminals] if pos != -1
        ]

        if len(terminals_pos) > 0:
            return completion[: min(terminals_pos)]
        else:
            return completion
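

# ---------------------------------------------------------------------------
# Usage sketch (not part of the original module): a minimal, hedged example of
# how this tokenizer is typically driven. It assumes the fnlp/moss-moon-003-sft
# repository listed in PRETRAINED_VOCAB_FILES_MAP above is reachable, so that
# `from_pretrained` can fetch vocab.json and merges.txt.
if __name__ == "__main__":
    tokenizer = MossTokenizer.from_pretrained("fnlp/moss-moon-003-sft")

    text = "x = 1\n# trailing comment that we want to drop"
    ids = tokenizer(text)["input_ids"]
    print(ids)

    # Plain round-trip decode of the ids back to text.
    print(tokenizer.decode(ids))

    # `truncate_before_pattern` cuts the decoded string at the first match of any
    # of the given regexes (here: a comment line), which is handy for trimming
    # model completions; this is expected to print just "x = 1".
    print(tokenizer.decode(ids, truncate_before_pattern=["^#"]))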