Llama tokenizer online
Note that tokens in Llama aren't words but subwords, since the Llama tokenizer uses Byte-Pair Encoding (BPE). Large language models decode text through tokens, frequent character sequences within a text corpus, and they master the art of recognizing patterns among tokens, predicting the next token in a series. In general one word is one token, but a single word can also be split into several tokens. For instance, the string "llama llama LLAMAllama Llama llamas" becomes [ll] [ama] [ llama] [ LL] [AM] [All] [ama] [ L] [lama] [ ll] [amas].

Several online playgrounds let you explore this. The 🦙 llama-tokenizer-js 🦙 playground is a simple web app to play with the Llama tokenizer: replace the text in the input field to see how tokenization works (the demo text starts with the special token <s>, and the llama emoji is encoded as the byte tokens <0xF0> <0x9F> <0xA6> <0x99>). The 🦙 llama3-tokenizer-js 🦙 playground does the same for Llama 3 models (Llama 3.1 8B), showing both how a piece of text is tokenized and the total count of tokens in that piece of text. The gpt-tokenizer playground is the most feature-complete GPT token encoder/decoder, with support for OpenAI models such as o1, GPT-4o, GPT-4 and GPT-3.5, and sibling pages cover the Mistral and Gemma tokenizers. These online tools use the same tokenization algorithms as the tokenizers of popular large language models (LLMs) like OpenAI's GPT-4 and Google Gemini, so they give you a precise token count: type or paste the text you want to analyze into the text area and the calculator automatically reports the number of tokens.

Keep in mind what the tokenizer is actually for: you are submitting to the LLM a set of numbers that encodes your input, and the LLM then uses those numbers to build its output piece by piece. A Llama tokenizer is different from the tokenizers used by OpenAI models; tiktoken encodings such as o200k_base (GPT-4o), cl100k_base (GPT-3.5-turbo and GPT-4), p50k_base, p50k_edit and r50k_base split words and subwords differently, so an OpenAI tokenizer (or ChatGPT's token-count web page) only gives a very rough approximation of a LLaMA token count.

When you need reliable counts, tools take different approaches. LLM Token Counter manages token limits for a wide range of models, including GPT-3.5, GPT-4, Claude-3 and Llama-3, and a dedicated Llama 3 token counter estimates counts specifically for Llama 3 and Llama 3.1 models; this is crucial for optimizing prompts and managing computational resources, and it helps you understand model behavior on code, multilingual input and long prompts. Some web applications make network calls to Python services that run the Hugging Face transformers tokenizer (the oobabooga text-generation-webui, for example, exposes an API endpoint for token counts, and the llama.cpp server offers a similar endpoint), while others count tokens entirely on the client side. As one developer put it: "The main problem with web based tokenizers is they need to know the model vocab, which is variable, bulky and unpredictable (even llama can range from 32000 to 32100 or more)." For tasks requiring high accuracy and speed, your best option is to encode the text with the model's own tokenizer and take the length of the result, falling back to tiktoken only if you don't have access to the tokenizer or don't want to load it; tiktoken is fast, but it has no encoding equivalent to LLaMA's.
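As a concrete illustration of that last point, here is a minimal token-counting sketch. It assumes the `transformers` package is installed and uses the ungated `oobabooga/llama-tokenizer` mirror mentioned later on this page, since Meta's official repositories are gated behind a license acceptance.

```python
# Minimal sketch: count Llama tokens with the Hugging Face tokenizer.
# Assumes `pip install transformers sentencepiece`; the repo id below is an
# ungated mirror of the Llama tokenizer, not Meta's official (gated) repo.
from transformers import AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("oobabooga/llama-tokenizer")

text = "llama llama LLAMAllama Llama llamas"
ids = tokenizer.encode(text, add_special_tokens=False)

print(len(ids))                               # token count
print(tokenizer.convert_ids_to_tokens(ids))   # the subword pieces themselves
```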
The LLaMA tokenizer is a BPE model based on sentencepiece: byte-level Byte-Pair-Encoding with ByteFallback and no normalization. The tokenizer is responsible for translating human language into an input the model accepts, and any tokenizer can represent a given word in several different ways depending on where it appears in a sentence: it captures the possibilities of the language within a vocabulary of, say, 30k to 50k entries, yet can produce a vastly wider array of actual words because its tokens are subword pieces rather than whole words. Note that although LLaMA builds on sentencepiece, its tokenizer is not the same thing as a plain sentencepiece tokenizer; additional work is required to create the LLaMA tokenizer from the sentencepiece one. In Hugging Face transformers, the Llama 2 language models use PreTrainedTokenizerFast as their tokenizer, and the LLaMA model was contributed by zphang with contributions from BlackSamorez.

If you want to see what is under the hood, there is a write-up on the internals of Hugging Face tokenizers (the state a tokenizer saves, the data structures it stores it in, and the methods it exposes) which also implements a minimal, under-200-line version of the 🤗 Tokenizer in Python for GPT-2, and another article dives deep into the tokenizer of the model Llama-2-7b-chat-hf. A question that comes up often in this context: "I have checked out your code, and you've something called 'tokenizer.model'. I am guessing this is the tokenizer file for llama. How do I get this file for flan-t5? I can't find it in the files of the model." (Other model families ship their tokenizer data in different files, so there is no tokenizer.model to find.)

One quirk of sentencepiece is that when decoding a sequence, if the first token is the start of a word (e.g. "Banana"), the tokenizer does not prepend the prefix space to the string.
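You can see that quirk for yourself with a small sketch, under the same assumptions as the snippet above (the `transformers` package and the ungated tokenizer mirror):

```python
# Sketch of the sentencepiece decoding quirk: word-initial pieces carry the
# "▁" marker, but when the first token of a sequence starts a word, decode()
# does not put a space in front of it.
from transformers import AutoTokenizer

tok = AutoTokenizer.from_pretrained("oobabooga/llama-tokenizer")

ids = tok.encode("Banana split", add_special_tokens=False)
print(tok.convert_ids_to_tokens(ids))  # pieces such as ['▁Banana', '▁split'] (exact split may vary)
print(repr(tok.decode(ids)))           # 'Banana split' -- no leading space
```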
Llama is a family of large language models released by Meta AI starting in February 2023. Meta LLaMA (Large Language Model Meta AI) is a state-of-the-art language model designed to understand and generate human-like text, and it is part of Meta's broader efforts to advance AI capabilities and integrate them into various applications. In particular, LLaMA-13B outperforms GPT-3 (175B) on most benchmarks, and LLaMA-65B is competitive with the best models, Chinchilla-70B and PaLM-540B. The models are released to the research community: to download the model weights and tokenizer, visit the Meta Llama website and accept the license; once your request is approved, you will receive a signed URL over email. The official Meta Llama 3 GitHub site is meta-llama/llama3.

Meta developed and released the Meta Llama 3 family of large language models (LLMs), a collection of pretrained and instruction-tuned generative text models in 8B and 70B sizes. The instruction-tuned models are optimized for dialogue use cases and outperform many of the available open-source chat models on common industry benchmarks. Llama 3 brings significant improvements over its predecessor, Llama 2, such as an enhanced tokenizer and a more efficient grouped-query attention mechanism. The big tokenizer change is a new vocabulary of 128,256 tokens (up from 32K in the previous version), which can encode text more efficiently, both for input and output, and potentially yields stronger multilingualism. Llama 1 and Llama 2 use a SentencePiece BPE tokenizer, whereas Llama 3 uses a tiktoken-style BPE tokenizer; both are BPE tokenizers despite the language used in the PR. Aston Zhang, a research scientist working on Llama at Meta, discusses the new tokenizer in Meta Llama 3 (Download Meta Llama 3: https://go.fb.me/kbpn54).

Llama 3.1 is a collection of open-source large language models, including a flagship 405B-parameter model and upgraded 8B and 70B models; these models boast improved performance rivaling closed-source alternatives, support a 128K context window, and are multilingual. The Llama 3.1 collection also supports leveraging model outputs to improve other models, including synthetic data generation and distillation; the Llama 3.1 Community License allows these use cases, while use in any manner that violates applicable laws or regulations (including trade compliance laws) is out of scope. As part of the Llama 3.1 release, Meta consolidated its GitHub repositories and added new ones as Llama expanded into an end-to-end Llama Stack, and developers are asked to use the consolidated repos going forward. The Llama 3.2 collection marked a further milestone for open-source AI, and Llama 3.3 was officially released by Meta in 2024 (it is a text-only, not multimodal, release). You can choose from the whole collection (Llama 3, Llama 3.1, Llama 3.2, Llama 3.3): open-source models you can fine-tune, distill and deploy anywhere.

For tokenization specifically, changes to the prompt format, such as EOS tokens and the chat template, have been incorporated into the tokenizer configuration that is provided alongside the HF model, so for applications already using a Hugging Face variant of Llama 3, the upgrade path to Llama 3.1 should be straightforward. The Llama 3.1 tokenizer is a powerful tool for managing tokenization in LLMs, providing flexibility and efficiency in text processing.
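Because the chat template and special tokens now live in the tokenizer configuration, rendering a conversation is a single tokenizer call rather than hand-built prompt strings. This is a hedged sketch: the repo id is illustrative and gated behind Meta's license.

```python
# Sketch: use the chat template bundled with a Llama 3 tokenizer config.
# Requires access to a (gated) Llama 3 tokenizer on the Hugging Face Hub.
from transformers import AutoTokenizer

tok = AutoTokenizer.from_pretrained("meta-llama/Meta-Llama-3-8B-Instruct")

messages = [
    {"role": "system", "content": "You are a helpful assistant."},
    {"role": "user", "content": "How many tokens is this?"},
]

# Returns the token ids the model expects, including <|begin_of_text|> and the
# <|eot_id|> end-of-turn markers defined in the tokenizer configuration.
ids = tok.apply_chat_template(messages, add_generation_prompt=True)
print(len(ids))
print(tok.decode(ids))
```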
Special tokens deserve their own discussion. llama.cpp (ggerganov's "LLM inference in C/C++" project on GitHub) has its own tokenize function, and by using the transformers Llama tokenizer with llama.cpp, special tokens like <s> and </s> are tokenized correctly. Setting parse_special = false disables the use of special tokens during tokenization, which is useful when the text you want to tokenize includes the literal text of special tokens (e.g. "the token 123 is identified by the string '<|im_start|>'").

The Llama 2 chat format is a common source of confusion: in tokenizer_config.json there should not be [INST] or <<SYS>> entries. It is entirely possible they were originally planned, since the recipe says "Please verify that your tokenizer support adding '[INST]', '[/INST]' to your inputs"; you can check by requesting access to the original meta-llama/Llama-2-7b-chat repository and looking at its tokenizer_config.json. The base llama-3 tokenizer, for its part, exposes only <|begin_of_text|> and <|end_of_text|>, which raises the question of how to handle the rest of the special tokens: you can add them to the tokenizer manually, but you need to make sure their token IDs end up the same as in pretraining. Daniel from Unsloth also noted that some special tokens are untrained in the base Llama 3 model, which caused plenty of fine-tuning issues, especially for people who add their own tokens or train on the instruct tokens.

BOS and padding are two more recurring pain points. llama.cpp adds a second BOS token under certain conditions/frontends if one already exists, and this kind of thing is a problem beyond llama.cpp: in text-generation-webui, plain llama.cpp and exllama always show a BOS in the token viewer. The Llama tokenizer also has no pad_token set; many examples set it to the EOS token, but others argue it shouldn't be, so some people training custom chatbots on Llama 2 7B (for example with OpenAssistant's prompt format) set it to a different token instead.
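One common way to handle the missing pad token, shown here only as a sketch since the right choice depends on your training setup, is to register a dedicated padding token and resize the model's embeddings to match:

```python
# Sketch: give a Llama model a dedicated pad token instead of reusing EOS.
# The repo id is illustrative; the official meta-llama repos are gated.
from transformers import AutoModelForCausalLM, AutoTokenizer

name = "meta-llama/Llama-2-7b-hf"
tok = AutoTokenizer.from_pretrained(name)
model = AutoModelForCausalLM.from_pretrained(name)

if tok.pad_token is None:
    tok.add_special_tokens({"pad_token": "<pad>"})  # registers a brand-new token id
    model.resize_token_embeddings(len(tok))         # grow the embedding matrix to match
    model.config.pad_token_id = tok.pad_token_id

print(tok.pad_token, tok.pad_token_id)
```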
On the llama.cpp side, a few tokenizer internals are worth understanding. A widely discussed tokenization bug was technically not in the tokenizer itself but in the pre-tokenizer, a pre-processing step that is part of the inference portion of llama.cpp. The bug does not affect all BPE-based models: Llama 1, for example, is not affected even though its tokenizer is also BPE-based. The resulting change to the conversion process simply marks which pre-tokenizer should be used for a model, since llama.cpp now supports multiple different pre-tokenizers; for Llama 3 the new, correct pre-tokenizer llama-bpe is used, and the EOS token is correctly set to <|eot_id|>. One example of a fixed conversion is meta-llama/Meta-Llama-3-70B-Instruct converted to GGUF without changing the tensor data type; in case of differences between copies of the tokenizer files, the more functional copy is chosen.

Users still ask whether there is documentation of what exactly llama.cpp does with tokenizer.ggml.model, tokenizer.ggml.token_type, tokenizer.ggml.tokens, tokenizer.ggml.pre and tokenizer.ggml.merges (and what happens if some of them, like merges, are not present), and whether there are any non-trivial hard-coded processing steps not governed by a parameter in the GGUF.

A further subtlety is the `ignore_merges` flag. Its behavior is not generally expected of all tokenizers, but it is expected of the Llama 3 tokenizer, which has `ignore_merges` set to true: when a token corresponding to a piece of the input text is found directly in the vocabulary, normal processing is skipped ("we ignore merges") and that token is used as-is.
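To make the `ignore_merges` idea concrete, here is a deliberately tiny, self-contained sketch (not llama.cpp's or Hugging Face's actual implementation) of a BPE step that checks the vocabulary for the whole piece first and only falls back to merge rules otherwise:

```python
# Toy illustration of BPE merging with an "ignore_merges"-style shortcut.
# Vocabulary and merge rules are invented for the example; real Llama 3
# tokenization also involves a regex pre-tokenizer and byte-level handling.
TOY_VOCAB = {"llama", "ll", "ama", "ma", "l", "a", "m"}
TOY_MERGES = [("l", "l"), ("m", "a"), ("a", "ma")]  # applied in priority order

def bpe_encode(piece: str, ignore_merges: bool = True) -> list[str]:
    # The shortcut: if the whole piece is already a vocabulary entry,
    # return it directly and skip the merge loop entirely.
    if ignore_merges and piece in TOY_VOCAB:
        return [piece]

    symbols = list(piece)              # start from single characters
    for left, right in TOY_MERGES:     # greedily apply each merge rule in turn
        i = 0
        while i < len(symbols) - 1:
            if symbols[i] == left and symbols[i + 1] == right:
                symbols[i:i + 2] = [left + right]
            else:
                i += 1
    return symbols

print(bpe_encode("llama"))                       # ['llama'] -- whole piece found in vocab
print(bpe_encode("llama", ignore_merges=False))  # ['ll', 'ama'] -- built up via merge rules
```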
When converting a model yourself, the tokenizer files matter as much as the weights. Once you run the -update.py <hf-token> script, go into the models/tokenizers folder of the llama.cpp repo; if it ran properly you will find a llama-bpe folder with tokenizer configs. Replace the corresponding files in the model folder you are converting with those, then quantize. Getting the inputs wrong produces errors like "This doesn't look like a GGML format file", for example after running python3 .\convert-llama-ggml-to-gguf.py --input .\models\llama-3-70b-instruct\tokenizer.model --output .\models\llama-3-70b-instruct\output-3-70b-instruct --gqa 8, where the --input is probably wrong (a tokenizer.model file is not a GGML model, and passing the folder as the input doesn't work either).

If you only need the tokenizer, there are lighter options. Paste oobabooga/llama-tokenizer into the model downloader and click Download, or run python download-model.py oobabooga/llama-tokenizer in the terminal; that repository is a copy of the llama2 tokenizer used as a fallback tokenizer for KoboldAI, optimized with defaults for text completion and kept functionally identical to the upstream llama2 tokenizer apart from minor differences in its defaults. There is also a pure JavaScript tokenizer that runs in your browser and can load tokenizer.json and tokenizer_config.json from any repository on Hugging Face, and you can instantiate a keras_hub.Tokenizer from a model preset, a preset being a directory of configs, weights and other file assets used to save and load a pre-trained model, which can be passed in one of several forms.

For end-to-end experiments the usual pattern applies: an LLM (say meta-llama/Llama-2-70b-chat-hf) plus the respective tokenizer for the model; you initialize the model and move it to a CUDA-enabled GPU, which on Colab can take 5-10 minutes to download and initialize. There are notebooks on fine-tuning Llama 2 on a personal computer with QLoRA and TRL, on quantizing Llama 2 with GPTQ from the AutoGPTQ library, on running the Llama 2 Chat model with 4-bit quantization locally or on Google Colab, and on deployment. In text-generation-webui, transformers parameters like epsilon_cutoff, eta_cutoff and encoder_repetition_penalty can be used, and it is now about as fast as using llama.cpp directly but with more samplers. Finally, a small Python helper called llamatokenizer wraps tokenization behind a single tokenize function whose documented arguments are: tokenize (the string or filepath to tokenize), tokenizer (the Hugging Face tokenizer to use, in [distributor]/[model] style, e.g. "oobabooga/llama-tokenizer"), truncate (whether or not to shorten the text), and max_length (the max length to truncate to).
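Putting those documented arguments together, a call might look like the following sketch. Everything beyond the argument names is an assumption; I have not verified how the llamatokenizer package behaves or what it returns.

```python
# Hypothetical usage of the llamatokenizer helper described above. The
# argument names come from its documentation; the input text, the returned
# structure and the JSON dump are assumptions for illustration only.
import json

from llamatokenizer import tokenize as llama_tokenize

tokens = llama_tokenize(
    tokenize="Replace this text to see how llama tokenization works.",
    tokenizer="oobabooga/llama-tokenizer",  # Hugging Face tokenizer, [distributor]/[model] style
    truncate=True,                          # whether or not to shorten the text
    max_length=2048,                        # the max length to truncate to
)
print(json.dumps(tokens, indent=2))
```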
For client-side JavaScript work there is 🦙 llama-tokenizer-js 🦙, a JavaScript tokenizer for LLaMA-based LLMs that works client-side in the browser and also in Node, in TypeScript codebases, in ES6 projects and in CommonJS projects; its intended use case is calculating token counts accurately on the client side. The author has open-sourced JavaScript tokenizers for LLaMA 1, 2 and 3: llama3-tokenizer-js, which covers Llama 3 and Llama 3.1, is a fork of the earlier LLaMA 1 tokenizer llama-tokenizer-js. Start using it in your project by running `npm i llama-tokenizer-js`; there are 6 other projects in the npm registry using llama-tokenizer-js, and the latest version is 1.2, last published 6 months ago. The BPE implementation at the core of the library is original work that was adapted into transformers.js (which introduced a llama tokenizer by integrating llama-tokenizer-js), while several helper functions used in LLaMA 3 pre-tokenization were in turn adapted from transformers.js. Since the release of llama-tokenizer-js, alternative llama tokenizers have appeared. If you want to modify the library to support a new LLaMA tokenizer (new as in trained from scratch, not reusing the tokenizer most LLaMA models share), you should be able to do so by swapping the vocabulary and merge data, the two long variables near the end of the llama-tokenizer.js file; you need the vocab data, yes, and don't forget the merge data.

The Llama tokenizer layer is based on SentencePiece, so the resulting tokenization (how an input sequence is split into tokens) depends on the statistics of the training data, and under-represented languages pay for it. One user applying Llama to Korean text found that, for the same input, the Llama tokenizer produced five to six times more tokens than the KoBERT tokenizer, presumably because Llama was not built with Korean in mind. The Amharic Llama tokenizer uses one sixth the number of tokens for the same Amharic text, and the Llama3-replace5 tokenizer variant replaces 5% of the Llama 2 tokenizer vocabulary with the most frequent Arabic tokens from MLV2; approaches like these aim to enhance a model's bilingual capabilities while maintaining efficiency. The Llama2 Chinese community works in the same spirit ("Welcome to the Llama2 Chinese community! We are an advanced technical community focused on optimizing Llama2 for Chinese and building on top of it. Based on large-scale Chinese data, we continuously upgrade Llama2's Chinese capabilities through iteration, starting from pre-training."), and the chinese-llama-tokenizer project (CanvaChen/chinese-llama-tokenizer on GitHub) aims to build a small, more linguistically sound llama tokenizer supporting Chinese, English and Japanese. Conceptually, pre-training is pretty simple: the model sees lots of text and repeatedly tries to predict the next token, so the tokenizer's vocabulary directly shapes what the model learns. TokenMonster, an ungreedy tokenizer and vocabulary builder said to outperform tiktoken by 35%, is a new method of tokenization that anyone intending to make their own LLM should look into; it is compatible with LLaMA but would require doing the pretraining over again.

Beyond plain text tokenizers, the SEED team released the checkpoints and code of the SEED-2 visual tokenizer and the SEED-LLaMA-8B/14B foundation models along with an online gradio demo (2023-10-20), and published the SEED-LLaMA technical report on arXiv (2023-10-02); the upgraded SEED-2 tokenizer better preserves rich visual semantics and reconstructs more realistic images. On the code side, Code Llama is released under a permissive license that allows both research and commercial use; notably, Code Llama - Python 7B outperforms Llama 2 70B on HumanEval and MBPP, and all Code Llama models outperform every other publicly available model on MultiPL-E. The Code Llama model was proposed in "Code Llama: Open Foundation Models for Code" by Baptiste Rozière, Jonas Gehring, Fabian Gloeckle, Sten Sootla, Itai Gat, Xiaoqing Ellen Tan, Yossi Adi, Jingyu Liu, Tal Remez, Jérémy Rapin, Artyom Kozhevnikov, Ivan Evtimov, Joanna Bitton, Manish Bhatt, Cristian Canton Ferrer, Aaron Grattafiori, Wenhan Xiong, Alexandre Défossez and co-authors; the officially released checkpoints live in the codellama org. And if you have ever wanted to inference a baby Llama 3 model in pure C, now you can: run LLaMA 3 8B models with one simple 700-line C file (run.c), and see Andrej Karpathy's repo for the real deal built on the llama2.c architecture, along with the many other cool models he has built.

Finally, you can build a Llama-style tokenizer from scratch with the Hugging Face Tokenizers library: you first need to prepare your dataset (custom tokenization rules may be needed depending on the dataset), and the library is designed for efficiency and speed, allowing you to train a tokenizer on large datasets like wikitext-103 in just a few seconds.
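As a closing sketch, here is roughly what training a small byte-level BPE tokenizer with the Hugging Face Tokenizers library looks like. The corpus, vocabulary size and special tokens below are placeholders; a production Llama-style tokenizer involves far more data and care.

```python
# Sketch: train a tiny byte-level BPE tokenizer with the `tokenizers` library
# (pip install tokenizers). All parameters here are illustrative placeholders.
from tokenizers import Tokenizer, decoders, models, pre_tokenizers, trainers

corpus = [
    "llama llama LLAMAllama Llama llamas",
    "Tokenizers turn text into subword pieces.",
]  # in practice: an iterator over a large corpus such as wikitext-103

tokenizer = Tokenizer(models.BPE())
tokenizer.pre_tokenizer = pre_tokenizers.ByteLevel(add_prefix_space=False)
tokenizer.decoder = decoders.ByteLevel()

trainer = trainers.BpeTrainer(vocab_size=1000, special_tokens=["<s>", "</s>"])
tokenizer.train_from_iterator(corpus, trainer=trainer)

tokenizer.save("toy-tokenizer.json")
print(tokenizer.encode("llama llamas").tokens)
```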