---
title: Tagset Completer
emoji: 🐢
colorFrom: gray
colorTo: gray
sdk: gradio
sdk_version: 4.19.1
app_file: app.py
pinned: false
tags:
- not-for-all-audience
---

## Frequently Asked Questions (FAQs)

Technically I am writing this before anyone but me has used the tool, so no one has asked questions yet. But if they did, here are the questions I think they might ask:

### Why is this space tagged "not-for-all-audience"?

The "not-for-all-audience" tag informs users that this tool's text output is derived from e621.net data for tag prediction and completion. This measure underscores a commitment to responsible content sharing.

### Does input order matter?

No.

### Should I use underscores in the input tags?

It doesn't matter; the application handles tags either way.

### Why are some valid tags marked as "unseen", and why don't some artists ever get returned?

Data that did not occur frequently enough in the sample the application's calculations are based on is excluded from consideration. If an artist or tag is too infrequent, we might not have enough data to make reliable predictions about it.

### Are there any special tags?

Yes. We normalized the favorite counts of each image to a range of 0-9, with 0 being the lowest favcount and 9 the highest. You can include any of the special tags "score:0" through "score:9" in your list to bias the output toward artists with higher- or lower-scoring images.

### Are there any other special tricks?

Yes. If you want to bias the artist output more strongly toward a specific tag, you can simply list it multiple times. For example, the query "red fox, red fox, red fox, score:7" will yield a list of artists more strongly associated with the tag "red fox" than the query "red fox, score:7" would.

### What calculation is this thing actually performing?

Each artist is represented by a "pseudo-document" composed of all the tags from their uploaded images, treating these tags like words in a text document. Likewise, when you input a set of tags, the system creates a pseudo-document for your query out of all those tags. It then uses a technique called cosine similarity to compare your tags against each artist's collection, essentially finding which artist's tags are most "similar" to yours. This method helps identify artists whose work is closely aligned with the themes or elements you're interested in. For those curious about the underlying mechanics of comparing text-like data, we employ the TF-IDF (Term Frequency-Inverse Document Frequency) method, a standard approach in information retrieval. You can read more about TF-IDF on its [Wikipedia page](https://en.wikipedia.org/wiki/Tf%E2%80%93idf).
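
The Space's actual code is in app.py and isn't reproduced here, but the calculation described above can be sketched with scikit-learn. Everything in the snippet below (the artist names, the tag strings, and the choice of `TfidfVectorizer`/`cosine_similarity`) is illustrative, not the application's implementation. It also shows why repeating a tag in the query biases the ranking: duplication raises that tag's term frequency in the query pseudo-document.

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

# Hypothetical artist pseudo-documents: every tag from an artist's uploads,
# concatenated into one space-separated string (multi-word tags use underscores).
artist_docs = {
    "artist_a": "red_fox canine outdoors score:7 red_fox forest",
    "artist_b": "domestic_cat indoors score:3 window",
    "artist_c": "red_fox snow score:9 canine",
}

# Fit TF-IDF over the artist pseudo-documents. A custom token pattern keeps
# tags like "score:7" intact as single tokens.
vectorizer = TfidfVectorizer(token_pattern=r"\S+")
artist_matrix = vectorizer.fit_transform(artist_docs.values())

# The query is treated as a pseudo-document too. Listing "red_fox" three times
# raises its term frequency, pulling the ranking toward red-fox-heavy artists.
query = "red_fox red_fox red_fox score:7"
query_vec = vectorizer.transform([query])

# Rank artists by cosine similarity between the query vector and each artist vector.
scores = cosine_similarity(query_vec, artist_matrix).ravel()
ranked = sorted(zip(artist_docs.keys(), scores), key=lambda pair: pair[1], reverse=True)

for artist, score in ranked:
    print(f"{artist}: {score:.3f}")
```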