I used the same code shared above (you should set QUANTIZE = "int8"). To clarify, int8 quantization results in about 20% slower inference time. Local evaluation times: float32: 167 s, int8: 200 s.
int8 quantization turned out to be seriously slower in Kaggle's evaluation environment; it took far longer than in local evaluation. Below is a comparison of the actual submission times:

quantization | local eval time (sec) | estimated submission time (min) | actual submission time (min)
tf.float32   | 167                   | -                               | 44
tf.int8      | 257                   | 67.7                            | 93 (timeout error)
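For readers who can't see the referenced notebook: post-training int8 quantization with the TensorFlow Lite converter typically looks like the sketch below. The tiny model and random calibration data are stand-ins I made up, not the thread's actual code.

```python
import numpy as np
import tensorflow as tf

# A tiny stand-in model; the thread's actual model isn't shown here
model = tf.keras.Sequential([
    tf.keras.layers.Input(shape=(4,)),
    tf.keras.layers.Dense(2),
])

def representative_dataset():
    # Calibration samples the converter uses to pick int8 scales/zero-points
    for _ in range(10):
        yield [np.random.rand(1, 4).astype(np.float32)]

converter = tf.lite.TFLiteConverter.from_keras_model(model)
converter.optimizations = [tf.lite.Optimize.DEFAULT]
converter.representative_dataset = representative_dataset
# Force full-integer quantization, including the input/output tensors
converter.target_spec.supported_ops = [tf.lite.OpsSet.TFLITE_BUILTINS_INT8]
converter.inference_input_type = tf.int8
converter.inference_output_type = tf.int8
tflite_int8 = converter.convert()
```

Whether this ends up faster depends heavily on the runtime: hardware without fast int8 kernels can be slower than float32, which is consistent with the timings reported above.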
I need to improve a bit more for this. I'll give it a try, thanks.
Okay, I wish you success! Let's keep in touch.
Imagine that you are looking for a path to the top of a mountain. Batch gradient descent is like climbing a mountain taking in all the scenery around you before deciding on your next move. You analyze all the factors, such as slope gradient, elevation, ground conditions, and so on before deciding on your next move. However, the downside of this method is that it takes you longer and requires a lot of effort and time to process all the information before moving on. Meanwhile, stochastic gradient descent is like climbing a mountain seeing only a small part of the scenery around you before deciding on your next move. You may only see one rock in front of you or a few trees to your right and decide your next move based on that information. The advantage of this method is that you can reach the top of the mountain faster because you don't need to process all the information thoroughly, but can rely on information obtained randomly from a small portion of data. However, there is also the risk of getting lost or not finding the best path because you haven't considered all the factors.
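The analogy above maps onto a toy one-parameter least-squares fit. In this sketch (the data and learning rates are invented for illustration), batch_gd surveys the whole mountain by using the gradient over all samples each step, while sgd looks at a single random sample:

```python
import random

# Toy noiseless data: y = 3x, so the optimal weight is exactly 3
xs = [1.0, 2.0, 3.0, 4.0]
ys = [3.0, 6.0, 9.0, 12.0]

def batch_gd(steps=100, lr=0.01):
    # "Survey all the scenery": average the gradient over ALL samples per step
    w = 0.0
    for _ in range(steps):
        grad = sum(2 * x * (w * x - y) for x, y in zip(xs, ys)) / len(xs)
        w -= lr * grad
    return w

def sgd(steps=200, lr=0.01, seed=0):
    # "Look at one rock": gradient from a single random sample per step
    rng = random.Random(seed)
    w = 0.0
    for _ in range(steps):
        i = rng.randrange(len(xs))
        grad = 2 * xs[i] * (w * xs[i] - ys[i])
        w -= lr * grad
    return w
```

Both converge toward w = 3 here; on noisy data, SGD's path would wander more, which is the "risk of getting lost" in the analogy.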
Really interesting answer. Thank you
int8 quantization turned out to be seriously slower in Kaggle's evaluation environment; it took far longer than in local evaluation. Below is a comparison of the actual submission times:

quantization | local eval time (sec) | estimated submission time (min) | actual submission time (min)
tf.float32   | 167                   | -                               | 44
tf.int8      | 257                   | 67.7                            | 93 (timeout error)
Thank you! How did you measure the actual submission time?
Now, I am a Kaggle Discussion Expert. Thank you, Kagglers, for standing beside me ❤️❤️ Happy coding! 💯 Huge thanks to all Kagglers for upvoting 😍.
Congratulations , wish you the best in your next endeavors!
Nicely done visualizations, great work! I suggest adding some explanations so readers can understand the findings. Thanks for sharing 😎👊. Upvoted!!
I'll surely add some explanation. Thank you so much!
Hello dear community, I apologize in advance because I do not speak English well and am still learning. I would like to ask your advice on cleaning a dataset I have; I want to improve in this area. I have a dataset with df.shape = (39058, 13), and it contains missing values. df.isnull().sum() gives:
Day 0
Month 0
Year 0
Department 1781
Municipality 1819
Place of Occurrence 8772
Modality 6318
Reason for Reasoned 20094
Presumed Perpetrator 0
Confirmed Perpetrator 57
Demand 21804
Type of Unlinked 10449
First Source 43
dtype: int64
What I have done is delete the columns that have a lot of missing data, and group the Day, Month and Year columns into one called Date by reformatting them with pd.to_datetime. These are the remaining columns:
Day 0
Month 0
Year 0
Department 1781
Municipality 1819
Mode 6318
Presumed Perpetrator 0
Confirmed Perpetrator 57
First Source 43
Date 0
dtype: int64
I am considering using fillna(method="bfill", axis=0) on the Confirmed Perpetrator and First Source columns because they have little missing data. Finally, I am asking for guidance on what else I can do and whether what I have done so far is correct; I don't know how to treat the Department, Municipality and Modality columns. Thanks.
A rule of thumb for filling missing values: if a column is missing 5% of its values or less, you can fill it with 0; otherwise, fill it with the column's mean or median, depending on your data. For further help, share your notebook link so I can check your work 😊
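A sketch of that rule of thumb with pandas, using a made-up frame (the poster's dataset isn't available here). The ≤5% branch fills with 0; otherwise it uses the median for numeric columns and the mode for categorical ones:

```python
import numpy as np
import pandas as pd

# Hypothetical example frame; column names are illustrative only
df = pd.DataFrame({
    "Municipality": ["A", None, "B", "A", None, "B", "A", "A"],
    "Cases": [1.0, 2.0, np.nan, 4.0, 5.0, 6.0, 7.0, 8.0],
})

for col in df.columns:
    frac_missing = df[col].isna().mean()
    if frac_missing == 0:
        continue
    if frac_missing <= 0.05:
        # Few holes: a simple constant fill is usually harmless
        df[col] = df[col].fillna(0)
    elif pd.api.types.is_numeric_dtype(df[col]):
        # Numeric with more holes: median is robust to outliers
        df[col] = df[col].fillna(df[col].median())
    else:
        # Categorical: the most frequent value is a common default
        df[col] = df[col].fillna(df[col].mode()[0])
```

Note that df.fillna(method="bfill") from the question is deprecated in recent pandas in favor of df.bfill().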
You grabbed attention! Nice title and information
Thanks
And the benefit of k-fold cross-validation is reducing overfitting. Thanks!
Thanks, I really appreciate your interaction.
Learned so much from this, thank you for sharing it.
I am glad it helped you understand something. Thanks!
Great notebook for explaining and practicing k-fold cross-validation, thanks!
Thanks! It is a very important concept and I tried to explain it well.
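For anyone new to the concept being discussed, a minimal k-fold cross-validation sketch with scikit-learn (the iris dataset and logistic regression here are just placeholders, not the notebook's setup):

```python
from sklearn.datasets import load_iris
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

X, y = load_iris(return_X_y=True)
model = LogisticRegression(max_iter=1000)

# 5-fold CV: each sample is used for validation exactly once,
# giving a less optimistic estimate than a single train/test split
scores = cross_val_score(model, X, y, cv=5)
```

Averaging the five fold scores is the usual summary; a large spread between folds is itself a useful warning sign about variance.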
There are many excellent visualization libraries available, and the "best" one depends on the programming language you are using and your specific requirements. Here's a list of popular visualization libraries for different languages:

Python:
- Matplotlib: A versatile and widely used library for creating static, interactive, and animated visualizations in Python.
- Seaborn: A high-level library based on Matplotlib that simplifies the creation of complex and aesthetically pleasing statistical plots.
- Plotly: A library for creating interactive and web-based visualizations that can be rendered in Jupyter Notebooks or standalone HTML files.

R:
- ggplot2: A popular and powerful library for creating complex and aesthetically pleasing visualizations using a "grammar of graphics" approach.
- Shiny: A web application framework that allows you to build interactive web applications and dashboards for data visualization and exploration.
- Plotly (R): The R version of the Plotly library, allowing you to create interactive and web-based visualizations in R.
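As a quick taste of the first option, a minimal Matplotlib sketch that renders a simple line plot off-screen into PNG bytes (the data is invented):

```python
import io

import matplotlib
matplotlib.use("Agg")  # render off-screen; no display needed
import matplotlib.pyplot as plt

xs = [0, 1, 2, 3, 4]
ys = [x ** 2 for x in xs]

fig, ax = plt.subplots(figsize=(4, 3))
ax.plot(xs, ys, marker="o", label="y = x^2")
ax.set_xlabel("x")
ax.set_ylabel("y")
ax.legend()

# Serialize the figure to PNG in memory (fig.savefig("plot.png") writes a file)
buf = io.BytesIO()
fig.savefig(buf, format="png")
png_bytes = buf.getvalue()
```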
Thank you Épanouissement for your guidance
Amazing job.
Thanks , Your feedback and support always motivate me👍
Very glad to share that I became a Discussions Expert on Kaggle today. I just love being a part of this wonderful community 🧡. Everyone is so helpful and kind, it makes me wonder: why can't everyone be like Kagglers? XD Onto better things 😊 (P.S. Do check out my profile and work and leave your valuable feedback <3)
This is very inspiring
Excellent job! Please review my notebooks and provide your feedback. Thank you for your assistance
Will surely do, thanks a lot!
Hello Kagglers! 👋 Finally, I'm now a Triple Expert ("3X") on Kaggle (Discussion, Notebooks and Datasets Expert) ✌️✌️. I joined Kaggle about two or more years ago. At first I didn't know how to use it, but over time I researched Kaggle and then used it to get a dataset to work on during my studies in machine learning. In time I learned that I could upload my own machine learning projects, and I did. Lately I have taken a lot of interest in Kaggle: I made more machine learning projects and uploaded them, and many data scientists and machine learning engineers supported me and upvoted my notebooks. They also gave me advice on how to become more effective and do more work. I thank them all for their support and for the advice they gave me to improve my work. I'm looking forward to becoming a Kaggle Master and making more notebooks to help others, with your support. Thank you ❤️🤝 Thanks, Kaggle! ❤️✌️
Very Inspiring
Now, I am a Kaggle Discussion Expert. Thank you, Kagglers, for standing beside me ❤️❤️ Happy coding! 💯 Huge thanks to all Kagglers for upvoting 😍.
Best wishes 😊
thanks for the material, I bookmarked it
Good, I hope you will use it in the future!
What do you think about the fact that all homes listed in the Iowa data are several years old? What do you think explains this? How could you tell if you are right? Is that a problem?
The data is quite old; it seems to be about 13 years old.
This is a very well curated path , keep up the fantastic work!
Thank you Ravi
This is great. Time to download all the PDFs just in case the original links 404 🙈
Sure. I have attached the full book too, so if any link becomes inaccessible, keep that one to be on the safe side. Happy learning!
Thanks for sharing! I was giving an NLP course a year ago based on these materials. https://web.stanford.edu/~jurafsky/slp3/ has presentations as well.
Great…Thanks for sharing that
great resource.
Thank you Sadik 👍 Happy Learning
It's a good resource! Thanks for sharing. Keep it up!
Glad you found it useful!
Thank you for the NLP learning flow; I happen to be deepening my NLP skills right now.
Happy to help
wow this is a gem. Thanks for sharing
Happy learning !
very good job! Thanks!
Thanks a lot !
This dataset is for text detection, so it only contains bounding boxes around text. It does not have labels for the text content itself.
Thanks for your response. Can you provide some description of the dataset? I am not able to understand the annotations in the .txt files. Also, if you can help, is there any other source to download the COCO-Text dataset? I want to build a recognition model but am unable to find one.
I wouldn't say that Kaggle has helped me 'find' a job. But it certainly helps set me apart. Three reasons: Much of my machine learning expertise comes from Kaggle. Machine learning hasn't been a huge part of my data science career because there hasn't been the business case for it to be used. But consistently applying myself in competitions has greatly developed my ability to perform in the technical interview. Kaggle has developed my coding, engineering, and analytical skills. The process of becoming a notebooks master was a lot of hard work. Consistently applying myself to new problems and challenges in a way that was valuable for the community was almost like getting my daily workout. Trying to level up in ALL categories requires you to stretch yourself beyond what you think you are capable of. Kaggle allows me to stay current on the latest/greatest machine learning tools and methods. Jump into a competition and you'll quickly find that if something you're doing is outdated, it will show on the leaderboard. Competition discussions and notebooks are very rich sources of current applied machine learning methods.
Thank you so much for the detailed info about Kaggle and its uses. I will surely use Kaggle in a better way from now on.
Title. It's been a while since I've had to use SQL and would like to get in some practice to familiarize myself with it again.
I did all the problems on Data Lemur to get back into writing SQL. Nearly all of them have some unique solution that teaches you something new.
Hi, friend 👋 That's a good question 👀 It is important to understand that, first of all, your skills and knowledge in the field of machine learning matter most. However, it is no secret that first impressions count. If your profile has good competition results, datasets, training notebooks and various discussions, places where you helped someone or set out to learn something yourself, then this is guaranteed to add to the desire to hire you. Try, and you will succeed 👌👌 Below I will leave a link to a notebook on profile design. A beautiful profile is not that important, but it can show you from a good side at first glance. https://www.kaggle.com/code/mersico/how-to-make-a-beautiful-profile-easy I hope this was useful 😊
thank you so much friend
Hey, I applied for a job with just a Contributor Kaggle profile. Your resume is your top priority. Everything depends on your interviewer: if they think Kaggle is a good indicator of your skill, then it is important. I know many talented people in data science who don't have a Kaggle profile or whose profile is inactive.
okay will consider this fact also
The following are a CNN workflow's most essential steps:
Step 1: Choose a framework to build and assess the CNN model, such as Keras/TensorFlow or PyTorch.
Step 2: Import the dataset from the working directory and name it.
Step 3: Preprocess the images before running the CNN; shuffle the dataset to avoid repeating classes.
Step 4: Build your first CNN model using the Keras Sequential API and the keras.layers Conv2D layer.
Step 5: Train the model using model.fit() and assess it on the test dataset.
Step 6: Once you have trained and assessed the Keras-based CNN model, use model.save to save it.
To learn more about the details, please visit: 👉 Wikipedia. You can also read:
1. Deep Learning by Yoshua Bengio, Ian Goodfellow and Aaron Courville (05/07/2015)
2. Neural Networks and Deep Learning by Michael Nielsen (Dec 2014)
3. Deep Learning by Microsoft Research (2013)
4. Deep Learning Tutorial by LISA lab, University of Montreal (Jan 6 2015)
for better learning. Happy learning!
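Steps 4–6 above can be sketched as follows. The tiny random dataset, layer sizes, and file name are placeholders for illustration, not a recommended architecture:

```python
import numpy as np
from tensorflow import keras
from tensorflow.keras import layers

# Stand-in dataset: 8 random 28x28 grayscale "images", 3 classes
x_train = np.random.rand(8, 28, 28, 1).astype("float32")
y_train = np.random.randint(0, 3, size=(8,))

# Step 4: a minimal CNN with the Sequential API and a Conv2D layer
model = keras.Sequential([
    layers.Input(shape=(28, 28, 1)),
    layers.Conv2D(8, kernel_size=3, activation="relu"),
    layers.MaxPooling2D(),
    layers.Flatten(),
    layers.Dense(3, activation="softmax"),
])

# Step 5: compile and train briefly (1 epoch just to show the call)
model.compile(optimizer="adam",
              loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])
model.fit(x_train, y_train, epochs=1, verbose=0)

# Step 6: save the trained model (recent Keras prefers the .keras
# format; .h5 as mentioned in the post also works in TF 2.x)
model.save("tiny_cnn.keras")
```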
Such a brief and wonderful summary! It's always a nice way to learn visually.
Hey I think that a good Kaggle profile will help you stand out among the other candidates but also you need to have a good command of Data Science skills.
okayy will keep that in mind
I think this allows you to aggregate sleep times later by grouping by date. Indeed, the watch records the sleep time "by pieces". If for example I sleep from 02/18/2015 at 11 p.m. to 02/19/2015 at 7 a.m., then the watch's sleep time records can have as "startDate" 02/18/2015 AND 02/19/2015 although we are talking about the same night. To overcome this problem, startDates can be shifted by 12 hours to ensure that all recordings for the same night have the same start date. In this case the recordings of the night from 02/18/2015 to 02/19/2015 will be used to construct the sleep time for the date of 02/18/2015.
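A sketch of the 12-hour shift with pandas, using invented records for the example night described above:

```python
import pandas as pd

# Hypothetical records: one night split into two rows by the watch
records = pd.DataFrame({
    "startDate": pd.to_datetime(["2015-02-18 23:00", "2015-02-19 03:30"]),
    "minutes_asleep": [270, 210],
})

# Shift start times back by 12 hours so every piece of the same night
# falls on the same calendar date, then aggregate by that date
records["night"] = (records["startDate"] - pd.Timedelta(hours=12)).dt.date
per_night = records.groupby("night")["minutes_asleep"].sum()
```

Both pieces now land on 2015-02-18, so the whole night's sleep is summed under a single date.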
Thank you for your detailed answer🙏
Amazing work! I learned a lot about working with audio data, especially using CNNs. Thanks!
Thanks a lot for appreciating this notebook , glad you found this notebook useful !!!
Hey Arya! I would also like to contribute to this dataset! I love dad jokes. I am currently exploring it and I think we could do some improvements like adding two more columns separating the text by premise and punchline. What do you think?
Hey there, sounds amazing, I would love to collaborate on that for real. Please feel free to connect with me via email, and let's set up a new dataset with you as a joint collaborator on Kaggle!
Thanks for your response. Can you provide some description of the dataset? I am not able to understand the annotations in the .txt files. Also, if you can help, is there any other source to download the COCO-Text dataset? I want to build a recognition model but am unable to find one.
I have updated the dataset description; please check it. As for a text recognition model, please search for the keyword "OCR" on the internet.
I. Introduction
II. Insufficient hyperparameter range
III. High dimensional search space
IV. Insufficient training data
V. Non-stationary data distribution
VI. Conclusion

Introduction
Hyperparameter tuning is an essential step in the development of any machine learning model. It involves adjusting the values of model parameters that cannot be learned from data, such as learning rate, regularization strength, and number of hidden units in a neural network. However, even after rigorous hyperparameter tuning, a model's performance may not improve as expected. In this article, we will discuss some of the reasons why hyperparameter tuning may not improve model performance.

Insufficient hyperparameter range
One possible reason why hyperparameter tuning may not lead to improved performance is that the range of values explored is too narrow. If the hyperparameter space is too constrained, the tuning process may not discover the optimal set of hyperparameters that would improve performance. In such cases, it may be necessary to broaden the range of hyperparameter values to be explored.

High dimensional search space
Hyperparameter tuning becomes challenging when the search space is high dimensional, meaning that there are many hyperparameters to tune. In such cases, the tuning process can become computationally expensive and time-consuming, and it may not be possible to explore the entire search space exhaustively. This can lead to suboptimal or even poor hyperparameter choices, resulting in a lack of improvement in model performance.

Insufficient training data
Another possible reason why hyperparameter tuning may not lead to improved performance is a lack of sufficient training data. In machine learning, more data typically leads to better performance, and the same holds for hyperparameter tuning. With insufficient training data, the tuning process may not be able to identify the optimal hyperparameters that would generalize well to unseen data. In such cases, it may be necessary to acquire more training data or to use data augmentation techniques to increase the effective size of the training set.

Non-stationary data distribution
In some cases, the underlying data distribution may change over time, making it difficult for hyperparameter tuning to improve model performance. For example, in online learning settings, the distribution of data may change as new data becomes available, and the optimal set of hyperparameters may also change. In such cases, it may be necessary to re-evaluate the optimal set of hyperparameters periodically or to use adaptive learning algorithms that can adjust hyperparameters dynamically.

Conclusion
Hyperparameter tuning is an essential step in developing accurate machine learning models. However, it is not a guaranteed process, and there are several reasons why hyperparameter tuning may not lead to improved performance. In this article, we have discussed some of the possible reasons, including insufficient hyperparameter range, high dimensional search space, insufficient training data, and non-stationary data distribution. It is essential to be aware of these factors and to carefully consider them when tuning hyperparameters.
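On the first point (insufficient hyperparameter range), one way to broaden the search is to sample log-uniformly over several orders of magnitude instead of hand-picking a few nearby values. The model, dataset, and range below are purely illustrative:

```python
from scipy.stats import loguniform
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import RandomizedSearchCV

X, y = make_classification(n_samples=300, n_features=10, random_state=0)

# Sample the regularization strength C log-uniformly across six orders
# of magnitude rather than from a narrow hand-picked list
search = RandomizedSearchCV(
    LogisticRegression(max_iter=2000),
    param_distributions={"C": loguniform(1e-3, 1e3)},
    n_iter=15,
    cv=3,
    random_state=0,
)
search.fit(X, y)
best_c = search.best_params_["C"]
```

Random search with wide log-scale distributions is also a common answer to the high-dimensional-search-space problem, since it explores many hyperparameters without a full grid.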
I loved this thread and found it helpful. I'll keep this in mind. In the case of unstructured data like images or audio, tuning hyperparameters for large models becomes an even more challenging task.
Hello!!! Brilliant approach!! One question: do we need to apply feature scaling to the categorical data, as you have applied the StandardScaler?
From what little I know from my short journey into data science, I think scaling the categorical data (after one-hot encoding) does not impact it much. That is the reason why, instead of specifying the numeric columns to which standard (or any) scaling is applied, a general practice is to select the entire data frame and apply the scaling; it would not hurt the one-hot encoded categorical columns. However, Rumi sir, with his vast experience, can throw some light here.
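One common alternative to scaling the whole frame is to scale only the numeric columns and leave the one-hot indicators as 0/1, using scikit-learn's ColumnTransformer. The frame below is a made-up example:

```python
import pandas as pd
from sklearn.compose import ColumnTransformer
from sklearn.preprocessing import OneHotEncoder, StandardScaler

# Hypothetical frame with one numeric and one categorical column
df = pd.DataFrame({
    "income": [30.0, 50.0, 70.0, 90.0],
    "city": ["A", "B", "A", "B"],
})

# Scale only the numeric column; one-hot encode the categorical one,
# leaving its 0/1 indicators untouched by the scaler
pre = ColumnTransformer([
    ("num", StandardScaler(), ["income"]),
    ("cat", OneHotEncoder(), ["city"]),
])
X = pre.fit_transform(df)
```

Scaling one-hot columns usually doesn't break linear models, but keeping them as 0/1 preserves their interpretability, which is why this split is a common default.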
That is a clear notebook in terms of explanation! 👌 I like it.
Thank you for your kind comments!
, looks like you are working on remote sensing image processing tasks. 🙂
That’s right 👍
good point. upvoted.
Thank you! And thank you for the upvote.
Luckily I opened your notebook. Thanks for giving the information bro.
Thank you. You could check this logic in every dataset before starting your work.
Good compilation. This helps people who want to start learning NLP 👍
Thanks Srikanth, Hopefully yes!
Amazing analysis. Thank you for sharing ✨
Restaurant rating has become the most commonly used parameter for judging a restaurant. A restaurant's rating depends on factors like reviews, the area it is situated in, the average cost for two people, votes, cuisines, and the type of restaurant. The main goal of this project is to gain insights into the restaurants people like to visit and to understand the complex relationships between these features and how they influence restaurant ratings. Restaurant Rating Prediction App Link Kaggle Notebook Github Link This project is deployed as a Streamlit app; you should explore and play around with it to experience the tremendous value added by model deployment.
It's such amazing work! Thanks for sharing your notebook.
This will improve the results on that specific testing set, but only because you are using information from the testing data to influence the training results.
Oh, I tried doing the same with train and validation data in a competition a while ago. It didn't go well; the model got most of the test predictions wrong in the end.
Since we have so little data this week, it seems simpler "beginner" ML algorithms will perform just as well as SOTA tabular algorithms (e.g. XGBoost). So I decided to plot the ROC-AUC score for each possible feature combination. Unsurprisingly, given other discussion posts (and notebooks), calc is an important feature: for a logistic regression model, any feature combination containing calc will generally be good. Further work is needed in case tuning offers a different viewpoint.
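A sketch of the feature-combination sweep described above. Since the competition data isn't reproduced here, a public dataset's first four columns stand in for features like calc:

```python
from itertools import combinations

from sklearn.datasets import load_breast_cancer
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

# Stand-in tabular data; the real competition columns aren't available here
data = load_breast_cancer(as_frame=True)
X, y = data.data.iloc[:, :4], data.target
feats = list(X.columns)

# Score every non-empty feature subset by cross-validated ROC-AUC
scores = {}
for r in range(1, len(feats) + 1):
    for combo in combinations(feats, r):
        model = LogisticRegression(max_iter=5000)
        auc = cross_val_score(model, X[list(combo)], y,
                              cv=3, scoring="roc_auc").mean()
        scores[combo] = auc

best_combo = max(scores, key=scores.get)
```

With k features this fits 2^k - 1 models, so exhaustive sweeps like this are only practical for small feature counts, which matches the "so little data" setting of the post.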
In my experiments, LightGBM also showed the best CV for gravity, ph, and calc, in case that helps anyone.
Great explanation! Thank you for throwing light on such an important topic.
Pleased to hear that!
Tokenization
Tokenization is the process of splitting a string or text into a list of tokens. One can think of a token as a part of a whole: a word is a token in a sentence, and a sentence is a token in a paragraph.
Key points of the article:
- Tokenizing text into sentences
- Tokenizing sentences into words
- Tokenizing sentences using regular expressions

Code #1: Sentence tokenization – splitting a paragraph into sentences. Each paragraph contains a large number of sentences, and it is not easy to process a whole paragraph in the preprocessing step, so the sentence tokenizer separates each sentence from the larger paragraph.
from nltk.tokenize import sent_tokenize
sentences = sent_tokenize(text)
Output: a list of sentence strings.

Code #2: Word tokenization – splitting a sentence into words. Each sentence contains many words; the word tokenizer separates each word, which helps to vectorize them.
from nltk.tokenize import word_tokenize
words = word_tokenize(text)
Output: a list of word (and punctuation) tokens.

FIGURE 1: A black box representation of a tokenizer. The text of three example fragments has been converted to lowercase and punctuation has been removed before the text is split.

Code #3: PunktWordTokenizer – it does not separate punctuation from the words (available in older NLTK versions).
from nltk.tokenize import PunktWordTokenizer
tokenizer = PunktWordTokenizer()
tokens = tokenizer.tokenize(text)

Code #4: WordPunctTokenizer – it separates punctuation from the words.
from nltk.tokenize import WordPunctTokenizer
tokenizer = WordPunctTokenizer()
tokens = tokenizer.tokenize(text)

Code #5: Using a regular expression (e.g. r"\w+" to keep runs of word characters):
from nltk.tokenize import regexp_tokenize
tokens = regexp_tokenize(text, r"\w+")

Ref:
https://www.tokenizer.cc/
https://www.geeksforgeeks.org/nlp-how-tokenizing-text-sentence-words-works/
https://smltar.com/tokenization.html
https://www.analyticsvidhya.com/blog/2022/01/guide-for-tokenization-in-a-nutshell-tools-types/
Impressive 👍
For regularization, do you use both alpha and lambda or only the former?
How do you choose the values of alpha and lambda?
Impressive 👍
Delighted, thank you apu!
Thanks for kindly explaining the principle of conv2d in detail. There is an interesting site about this. Please refer to http://playground.tensorflow.org
You are most welcome. In my next topic I will try to gather information from the http://playground.tensorflow.org site and refer to it.
The "nd" is the tricky part and a legal gray area, that's why we removed those from the dataset. Strictly speaking, extracting chunks from an XC recording is a derivative and therefore not permitted by the license.
Hmm, do you mean that the by-nc-nd recordings originally come from by-nc-sa recordings and are simply sub-chunks of the original recordings? What about by-sa or by, though? And maybe you have a link to docs about these licenses (it would be interesting to read). Thanks in advance!
Hi 👋 It is important to remember that when training a model, you essentially have an abstract control panel for all the learning processes: all the parameters that directly or indirectly affect training (the sizes of the training, test and validation sets, the number of hidden layers, normalization parameters, etc.). Changing any parameter can either improve the results or have no effect, so you should change things gradually. Improvement may fail to occur for various reasons, for example:
- A small amount of data (in this case, tuning may not be worth the time spent at all)
- The neural network architecture has run into the limits of its capabilities
- The optimizer may not be the best fit for a given architecture and task; more experiments and iteration over hyperparameters are needed. After all, machine learning is largely an empirical approach
- It is worth checking the quality of the data first of all, as you may have a strong class imbalance or noisy data
etc. I hope this was useful 👀
Since we have so little data this week, it seems simpler "beginner" ML algorithms will perform just as well as SOTA tabular algorithms (e.g. XGBoost). So I decided to plot the ROC-AUC score for each possible feature combination. Unsurprisingly, given other discussion posts (and notebooks), calc is an important feature: for a logistic regression model, any feature combination containing calc will generally be good. Further work is needed in case tuning offers a different viewpoint.
Thanks for great set of combinations!
Dummy Variables
A dummy variable (aka an indicator variable) is a numeric variable that represents categorical data, such as gender, race, political affiliation, etc. Technically, dummy variables are dichotomous, quantitative variables. Their range of values is small; they can take on only two quantitative values. As a practical matter, regression results are easiest to interpret when dummy variables are limited to two specific values, 1 or 0. Typically, 1 represents the presence of a qualitative attribute, and 0 represents the absence. Dummy variables are another way in which the flexibility of regression can be demonstrated. By incorporating dummy variables with a variety of functional forms, linear regression allows for sophisticated modeling of data. Want to know more on this topic in a descriptive way with an example? Click this notebook: 📌Dummy Variables & One Hot Encoding in ML📌
One Hot Encoding
One hot encoding is a technique used to represent categorical variables as numerical values in a machine learning model. The advantages of using one hot encoding include:
- It allows the use of categorical variables in models that require numerical input.
- It can improve model performance by providing more information to the model about the categorical variable.
- It can help to avoid the problem of ordinality, which can occur when a categorical variable has a natural ordering (e.g. "small", "medium", "large").
- It works very well unless your categorical variable takes on a large number of values (i.e. you generally won't use it for variables taking more than 15 different values; it'd be a poor choice in some cases with fewer values, though that varies).
One hot encoding creates new (binary) columns, indicating the presence of each possible value from the original data. Let's work through an example. The values in the original data are Red, Yellow and Green. We create a separate column for each possible value. Wherever the original value was Red, we put a 1 in the Red column.
Want to know more on this topic in a descriptive way with example - Click this notebook 📌Dummy Variables & One Hot Encoding in ML📌 Ref: https://www.formpl.us/blog/nominal-ordinal-data https://www.geeksforgeeks.org/ml-dummy-variable-trap-in-regression-models/ https://www.educative.io/blog/one-hot-encoding https://www.youtube.com/watch?v=9yl6-HEY7_s&list=PLeo1K3hjS3uvCeTYTeyfe0-rN5r8zn9rw&index=8 https://scikit-learn.org/stable/
Nice explanation apu
Hello Kagglers, With the rise of popular text generating AI models such as ChatGPT and Bard we think it’s important to share clear rules and principles for their use on the Kaggle forums. You may have seen that Stack Overflow banned the use of ChatGPT, here is a quote from them which highlights their rationale: “The primary problem is that while the answers which ChatGPT produces have a high rate of being incorrect, they typically look like they might be good and the answers are very easy to produce.” At Kaggle our discussion forum is primarily a place for our community to connect, learn from each other, and get answers to specific questions. When these AI tools are used to post content that is incorrect, is off topic, or is spammy in general - it harms our community and makes our discussion forums a less useful place for everyone. As such we are publishing the following guidelines on the use of AI generated text on Kaggle: 1. Posts must be verified and meaningful. All content shared on Kaggle must comply with our community guidelines, and this continues to be true for generated content. If you are using generated text to answer a question, it is your responsibility to ensure it is meaningful, verifiably true, and on topic. In particular we will be taking action against any user who generates a high volume of meaningless posts using these tools. 2. All generated text must be labeled. Everyone deserves to know if they are reading a post written by a human or a computer. We are asking that all posts which use generated text must clearly state the tool they used to create the text at the start of the post. You should also include the prompt whenever that adds valuable context to the information you share. Labeling is required but is not sufficient to post generated text, you must also ensure your post is meaningful. As this is a rapidly evolving space, we expect our policies and community norms around the use of generative AI models to also evolve. 
Please continue to share your feedback on the relationship you’d like to see our community establish with this technology. Thank you!
good information
I am sure most of you have seen the hype around Polars recently. The main highlight is the performance. Take a look at the TPCH benchmark below. 🏎

Results including reading parquet (lower is better) (source)
Results starting from in-memory data (lower is better) (source)

Thankfully the API for Polars is near identical to Pandas, which makes learning and experimenting with the library just that much easier! I started using Polars on a new project and was pleasantly surprised. A few Polars vs Pandas comparisons (with placeholder file/column names):

df_pandas = pd.read_csv("data.csv")
df_pandas.head()
df_pandas.groupby("col")["val"].agg("mean")
df_pandas["col"].value_counts()

df_polars = pl.read_csv("data.csv")
df_polars.head()
df_polars.groupby("col").agg([pl.mean("val")])
df_polars["col"].value_counts()

Polars is beneficial to become familiar with. It is perfect for data that is too big for Pandas to handle or too small to consider using PySpark for. Let me know if you are already experimenting with Polars and what your thoughts on it are so far. And if you haven't yet, what are you waiting for? 🐻‍❄️

Resources:
https://www.pola.rs
https://pola-rs.github.io/polars-book/user-guide/
https://towardsdatascience.com/an-introduction-to-polars-for-pandas-users-2a52b2a03017
https://seattledataguy.substack.com/p/why-is-polars-all-the-rage
I heard about Polars earlier, but I wasn't aware of its similarity with Pandas 😊. Thank you for sharing.
This is a parent topic for students enrolled in the Google Data Analytics course. Please post your questions here, rather than in new topics.
Hello Kaggle community, I'm currently enrolled in the Data Analytics program with Grow With Google. Is there a recommended site with practice questions and answers for R programming beginners that one could spend time on? Thanks
Keras Conv2D

Conv2D is a 2D convolution layer; this layer creates a convolution kernel that is convolved with the layer's input, producing a tensor of outputs.

Kernel: in image processing, a kernel is a convolution matrix or mask which can be used for blurring, sharpening, embossing, edge detection, and more, by doing a convolution between the kernel and an image.

The Keras Conv2D class constructor has the following arguments:

keras.layers.Conv2D(filters, kernel_size, strides=(1, 1), padding='valid', data_format=None, dilation_rate=(1, 1), activation=None, use_bias=True, kernel_initializer='glorot_uniform', bias_initializer='zeros', kernel_regularizer=None, bias_regularizer=None, activity_regularizer=None, kernel_constraint=None, bias_constraint=None)

Now let us examine each of these parameters individually:

1. filters
This mandatory Conv2D parameter is the number of filters that the convolutional layer will learn. It is an integer value and also determines the number of output filters in the convolution.

model.add(Conv2D(32, (3, 3), padding='same', activation='relu'))
model.add(MaxPooling2D(pool_size=(2, 2)))

Here we are learning a total of 32 filters and then we use max pooling to reduce the spatial dimensions of the output volume. As far as choosing the appropriate number of filters, it is always recommended to use powers of 2 as the values.

2. kernel_size
This parameter determines the dimensions of the kernel. Common dimensions include 1×1, 3×3, 5×5, and 7×7, which can be passed as (1, 1), (3, 3), (5, 5), or (7, 7) tuples. It is an integer or tuple/list of 2 integers, specifying the height and width of the 2D convolution window. This parameter is typically an odd integer.

model.add(Conv2D(32, (3, 3), activation='relu'))

3. strides
This parameter is an integer or tuple/list of 2 integers, specifying the "step" of the convolution along the height and width of the input volume.
Its default value is (1, 1), which means that the given Conv2D filter is applied to the current location of the input volume, the filter then takes a 1-pixel step to the right and is applied again, and this repeats until we reach the far right border of the volume.

model.add(Conv2D(32, (3, 3), strides=(1, 1), activation='relu'))
model.add(Conv2D(32, (3, 3), strides=(2, 2), activation='relu'))

4. padding
The padding parameter of the Keras Conv2D class can take one of two values: 'valid' or 'same'. Setting the value to 'valid' means that the input volume is not zero-padded and the spatial dimensions are allowed to reduce via the natural application of convolution.

model.add(Conv2D(32, (3, 3), padding='valid'))

You can instead preserve the spatial dimensions of the volume, such that the output volume size matches the input volume size, by setting the value to 'same'.

model.add(Conv2D(32, (3, 3), padding='same'))

5. data_format
This parameter of the Conv2D class can be set to either 'channels_last' or 'channels_first'. The TensorFlow backend to Keras uses channels-last ordering, whereas the Theano backend uses channels-first ordering. Usually we are not going to touch this value, as most of the time we will be using the TensorFlow backend to Keras. It defaults to the image_data_format value found in your Keras config file at ~/.keras/keras.json.

6. dilation_rate
The dilation_rate parameter of the Conv2D class is a 2-tuple of integers, controlling the dilation rate for dilated convolution. Dilated convolution is the basic convolution applied to the input volume with defined gaps. You may use this parameter when working with higher-resolution images where fine-grained details are important, or when constructing a network with fewer parameters.
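The interaction of kernel size, stride, and padding on the output size can be sanity-checked with a small helper. This is just the size arithmetic a 2D convolution applies along one spatial dimension, not Keras code:

```python
import math

def conv2d_out_size(in_size, kernel, stride=1, padding="valid"):
    """Spatial output size of a 2D convolution along one dimension."""
    if padding == "valid":
        # No zero padding: the window must fit entirely inside the input.
        return (in_size - kernel) // stride + 1
    elif padding == "same":
        # Zero-padded so that every input position produces an output.
        return math.ceil(in_size / stride)
    raise ValueError(f"unknown padding: {padding}")

# A 28x28 input with a 3x3 kernel:
print(conv2d_out_size(28, 3))                  # valid, stride 1 -> 26
print(conv2d_out_size(28, 3, padding="same"))  # same,  stride 1 -> 28
print(conv2d_out_size(28, 3, stride=2))        # valid, stride 2 -> 13
```

With 'same' padding and stride 1, the output size always equals the input size, which is exactly the "preserve spatial dimensions" behaviour described above.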
7. activation
The activation parameter to the Conv2D class is simply a convenience parameter which allows you to supply a string specifying the name of the activation function you want to apply after performing the convolution.

model.add(Conv2D(32, (3, 3), activation='relu'))

OR

model.add(Conv2D(32, (3, 3)))
model.add(Activation('relu'))

If you don't specify anything, no activation is applied (i.e. "linear" activation: a(x) = x).

8. use_bias
This parameter of the Conv2D class is used to determine whether a bias vector will be added to the convolutional layer. By default, its value is set to True.

9. kernel_initializer
This parameter controls the initialization method used to initialize all the values in the Conv2D class before actually training the model. It is the initializer for the kernel weights matrix.

10. bias_initializer
The bias_initializer controls how the bias vector is initialized before training starts. It is the initializer for the bias vector.

11. kernel_regularizer, bias_regularizer and activity_regularizer
kernel_regularizer is the regularizer function applied to the kernel weights matrix. bias_regularizer is the regularizer function applied to the bias vector. activity_regularizer is the regularizer function applied to the output of the layer (i.e. its activation). Regularization techniques reduce error by fitting a function appropriately on the given training set while avoiding overfitting. These parameters control the type and amount of regularization applied to the Conv2D layer. Using regularization is a must when working with large datasets and deep neural networks; it helps reduce the effects of overfitting and increases the model's ability to generalize. There are two main types of regularization, L1 and L2, both used to reduce overfitting of our model:

from keras.regularizers import l2
model.add(Conv2D(32, (3, 3), activation='relu', kernel_regularizer=l2(0.0005)))

The amount of regularization you apply is a hyperparameter you will need to tune for your own dataset; its value usually ranges from 0.0001 to 0.001. It is generally recommended to leave the bias_regularizer alone, as it has very little impact on reducing overfitting. It is also recommended to leave the activity_regularizer at its default value.

12. kernel_constraint and bias_constraint
kernel_constraint is the constraint function applied to the kernel matrix. bias_constraint is the constraint function applied to the bias vector. A constraint is a condition of an optimization problem that the solution must satisfy. There are several types of constraints: primarily equality constraints, inequality constraints, and integer constraints. These parameters allow you to impose constraints on the Conv2D layers, and they are usually left alone unless you have a specific reason to apply a constraint.

Source: https://www.geeksforgeeks.org/keras-conv2d-class/
Thanks for kindly explaining the principle of conv2d in detail. There is an interesting site about this. Please refer to http://playground.tensorflow.org
In this post, I summarize the different forms of cross validation, so you can appropriately validate your models.

K-Fold
Standard K-Fold is the simplest form of cross validation. This type of cross validation essentially creates a train/test split K times. You can specify K using the n_splits parameter. I recommend setting shuffle to True and setting a seed for the random_state. Shuffling the data may help you if your data has an inherent pattern between rows. Random states help you with reproducibility.

Code Example:
from sklearn.model_selection import KFold
kf = KFold(n_splits=5, random_state=42, shuffle=True)
for i, (train_index, test_index) in enumerate(kf.split(X)):
    ...

Stratified K-Fold
This is a very useful type of cross validation for classification tasks. It is basically the same as standard K-Fold, except it also preserves the percentage of samples for each class. This makes the distribution of the target variable more uniform across each fold. You will now pass in both your inputs (X) and targets (y) when splitting.

Code Example:
from sklearn.model_selection import StratifiedKFold
skf = StratifiedKFold(n_splits=5, random_state=42, shuffle=True)
for i, (train_index, test_index) in enumerate(skf.split(X, y)):
    ...

Group K-Fold
This is a very useful form of cross validation that can help you avoid data leakage. Each fold will have non-overlapping groups. In other words, each group will appear exactly once in the test set across all folds. An example of a time you might use this is when you have a user_id column. Let's say users appear in multiple samples in your data set. In this case, you should set the group column to be the user_id column so your model doesn't train and validate on the same user.

Code Example:
from sklearn.model_selection import GroupKFold
gkf = GroupKFold(n_splits=5)
for i, (train_index, test_index) in enumerate(gkf.split(X, y, groups)):
    ...

Sturge's Rule For Regression
This is a useful trick to improve your cross validation for regression tasks. You are basically binning the target column and then applying a stratified K-Fold.
Sturge's rule helps you determine the number of bins.

Code Example:
from sklearn import model_selection
num_bins = int(np.floor(1 + np.log2(len(data))))
data.loc[:, "bins"] = pd.cut(data["target"], bins=num_bins, labels=False)
kf = model_selection.StratifiedKFold(n_splits=5)
for i, (train_index, test_index) in enumerate(kf.split(X=data, y=data.bins.values)):
    ...

Time Series Split
Use this type of cross validation when dealing with time series data. This helps you avoid data leakage, because with a normal K-Fold split your model could accidentally see into the future when training. more info on time series cross validation

Code Example:
from sklearn.model_selection import TimeSeriesSplit
tscv = TimeSeriesSplit(n_splits=5)
for i, (train_index, test_index) in enumerate(tscv.split(X)):
    ...
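To see the no-leakage guarantee of Group K-Fold concretely, here is a small sketch (with made-up user IDs) verifying that a group never appears in both the train and test indices of the same fold:

```python
import numpy as np
from sklearn.model_selection import GroupKFold

rng = np.random.default_rng(0)
X = rng.normal(size=(20, 3))
y = rng.integers(0, 2, size=20)
groups = np.repeat(np.arange(5), 4)  # 5 hypothetical user_ids, 4 rows each

gkf = GroupKFold(n_splits=5)
for train_index, test_index in gkf.split(X, y, groups):
    # A user in the test fold never appears in the training fold.
    assert set(groups[train_index]).isdisjoint(set(groups[test_index]))
```

With 5 groups and n_splits=5, each fold's test set holds exactly one user, so the model is always validated on users it has never trained on.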
Thank you for explaining the various kinds of K-Fold methods. Could you summarize which types of datasets each method should be applied to?
Hello Kagglers!👋 Finally, I'm now a Triple Expert "3X" on Kaggle (Discussion, Notebooks and Datasets Expert)✌️✌️. I joined Kaggle a couple of years ago. At first I didn't know how to use it, but over time I read up on Kaggle and started using it to get datasets to work on during my studies in machine learning. In time I learned that I could upload my own machine learning projects, and I did. Lately I've been putting a lot into Kaggle: I made more machine learning projects and uploaded them, and many data science and machine learning engineers supported me, upvoted my notebooks, and gave me advice on becoming more effective and doing more work. I thank them all for their support and advice. I'm looking forward to becoming a Kaggle Master and making more notebooks to help others, with your support. Thank you.❤️🤝 Thanks Kaggle❤️✌️
Congratulations on becoming a Triple Expert on Kaggle.
I am currently doing my graduation internship and I would like to use this data for a predictive maintenance task. Could someone please tell me what types of machines are included in this dataset (e.g. engines, batteries, …)?
As far as I know, this is a synthetic/artificially generated dataset.
Hello Kagglers, I'd like to summarize my experience in this competition and share my experiments with you, hoping they might offer some insights. In this competition, you can generate endless training samples, so your computer's capabilities are the only limit. My desktop has one 3090 with 2TB of SSD space, so I hope most of you can replicate my experiments. Here is what I did:

1. Begin with the SDDB2m dataset and filter it based on prompt sentence embedding similarities. I removed image/prompt pairs with an embedding correlation of >0.95 to others, resulting in less than 500K images. Also download the 30K and 80K image datasets.
2. Take out some (say 25K not highly correlated) images as the test set.
3. Load pretrained OpenAI CLIP models (or any other powerful enough model of your choice), unfreeze a few transformer layers, and train a model (will share code later).
4. Generate your own samples, using a model trained on sentence embeddings (from the test data) to predict the cosine similarity of predictions and actual data. This helps you decide which images to generate to focus on your main model's weak areas.
5. Hard code a prompt generator (will share) and use that to generate prompt candidates.
6. Fine tune a GPT2 model with prompts and use that as a prompt generator (will share).
7. Filter the prompts from steps 5 and 6 based on predicted similarities (e.g., only include samples with <0.4 predicted similarities). Also, filter them based on embedding similarities between generated prompts and existing ones (e.g., >0.5) to make them more prompt-like. This way, you'll have samples that resemble prompts, but are challenging for the model to predict from images.
8. Divide the generated images into train and test sets, expanding both datasets.
9. Train the main model again, unfreezing more layers if necessary, and train a new similarity prediction model with the updated test data and model.
Iterate steps 7, 8, and 9 (as many times as time allows, ^_^) to obtain more samples and improve model predictions. After a couple of iterations, my results met my expectations, so I believe this method can help improve your score gradually and steadily. Happy to take questions if any of the steps above is unclear. I won't submit any entries, and I'd love to see myself at the bottom of the leaderboard in May! Good luck to all!

P.S.: I'll add code and data links later, so check back if you're interested!

UPDATES:
The main training script is updated here: https://www.kaggle.com/code/xiaozhouwang/model-training-script/notebook?scriptVersionId=123954941
The hard coded prompt generator: https://www.kaggle.com/code/xiaozhouwang/hard-coded-prompt-generator/notebook
Some images/prompts generated by fine-tuned GPT2: https://www.kaggle.com/datasets/xiaozhouwang/sd2gpt2
Some images/prompts generated with hard coded prompts: https://www.kaggle.com/datasets/xiaozhouwang/sd2hardcode
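A minimal sketch of the prompt-embedding similarity filtering described at the start of the post (dropping pairs with >0.95 similarity). The embeddings here are random stand-ins; the real ones would come from a sentence-embedding model:

```python
import numpy as np

# Hypothetical prompt embeddings, one row per prompt, L2-normalised.
emb = np.random.default_rng(0).normal(size=(100, 16))
emb /= np.linalg.norm(emb, axis=1, keepdims=True)

sim = emb @ emb.T                 # cosine similarity matrix (rows are unit vectors)
np.fill_diagonal(sim, 0.0)        # ignore self-similarity

# Drop a prompt if any earlier prompt is more than 0.95 similar to it.
keep = ~(np.triu(sim, k=1) > 0.95).any(axis=0)
filtered = emb[keep]
```

The same matrix can be reused for the later filtering steps by just changing the threshold and the direction of the comparison.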
Thanks for sharing! It seems this approach can keep working until the competition ends.
How well can it differentiate between spam messages and advertisements?
A good way to differentiate them is to check the sender’s identity, the message’s content, the message’s frequency, and the message’s legality before responding or taking any action.
How to improve accuracy?
Improving accuracy depends on the specific task and data you are working with. However, here are some general strategies you can try:

- Collect more data: One of the most effective ways to improve accuracy is to have more data. This can help your model learn more patterns and generalize better to new, unseen data.
- Feature engineering: Sometimes, the features in the dataset may not be sufficient to capture the underlying patterns in the data. In such cases, you can try creating new features or transforming existing features to better represent the data.
- Hyperparameter tuning: The performance of a machine learning model can be greatly influenced by the choice of hyperparameters.
- Regularization: Regularization techniques such as L1 and L2 regularization can help prevent overfitting and improve the generalization performance of the model.
- Ensemble methods: Ensemble methods such as bagging, boosting, and stacking can combine multiple models to improve accuracy.
- Preprocessing: Preprocessing the data can help remove noise, outliers, and other irrelevant information.
- Model selection: Choosing the right model for the task at hand can greatly influence the accuracy.
- Transfer learning: Transfer learning involves using a pre-trained model on a related task and fine-tuning it for the specific task at hand. This can often lead to better accuracy, especially when the dataset is small.

These are just some of the strategies you can try to improve the accuracy of your model. Keep in mind that their effectiveness can vary depending on the specific task and data, so experiment with different approaches to find the one that works best for your problem.
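As a small illustration of the hyperparameter-tuning point, here is a hypothetical grid search over the regularization strength of a logistic regression, using scikit-learn's GridSearchCV on synthetic data:

```python
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import GridSearchCV

# Synthetic binary classification data standing in for a real dataset.
X, y = make_classification(n_samples=200, random_state=42)

# Search over C, the inverse of the L2 regularization strength.
grid = GridSearchCV(
    LogisticRegression(max_iter=1000),
    param_grid={"C": [0.01, 0.1, 1, 10]},
    cv=5,
)
grid.fit(X, y)
print(grid.best_params_, grid.best_score_)
```

The same pattern applies to any estimator: smaller C means stronger regularization, and cross-validated search picks the value that generalizes best on held-out folds.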
Hi there -- I'm able to download version 4 here: https://www.kaggle.com/models/google/movenet/frameworks/TfJs/variations/singlepose-lightning/versions/4 Can you say more about what steps you're taking and more details about the problem if possible? Is there a minimal code example you could provide?
I'm experimenting with the TensorFlow Lite Pose Estimation Android Demo available at: https://github.com/tensorflow/examples/tree/master/lite/examples/pose_estimation/android. I switched from the default model to the int8 version to leverage Hexagon acceleration, but the performance did not meet my expectations. When my classmates ran the app on an 8 Gen 3 under NNAPI mode, the fps was lower compared to running it under GPU mode. Is this normal, or am I doing something wrong?
By the way, the default model runs at approximately 15fps under nnapi mode, while the int8 model achieves around 26fps. In GPU mode, both models reach a maximum of 30fps.
I created a notebook and copied the example code, but I'm getting an error on the line `from dedal import infer  # Requires google_research/google-research`: "ModuleNotFoundError: No module named 'dedal'". Would you be so kind as to help?
You have not installed the module. Note that `dedal` is not a PyPI package; as the comment in the example says, it lives in the google_research/google-research repository, so you need to clone that repo and add it to your Python path before `from dedal import infer` will work.
You probably haven't installed the module. Note that `dedal` isn't on PyPI, so a plain `pip install` won't find it; the comment in the example points to the google_research/google-research repository, which you need to clone and put on your path.
Does this model account for sarcasm, or could it account for sarcastic comments?
An interesting topic to study! It's very close to my heart. I'll be following how this develops 👀🖖
This model does not explicitly account for sarcasm, as it is not trained on data that contains sarcastic labels or annotations. A more robust and reliable way to account for sarcasm would be to train the model on data that explicitly labels or annotates sarcastic comments, using human judgments or crowdsourcing techniques.
Great job 👏
It was a little hard to figure out how to add a discussion or comment to the model, but eventually I got there.
Nicely done!!
Can anyone give me a pointer to where I can load EfficientNet-B0 with the Noisy Student weights?
You can use the tf.keras.applications.EfficientNetB0 function to load the EfficientNet B0 model with weights pre-trained on ImageNet. For example:

import tensorflow as tf
model = tf.keras.applications.EfficientNetB0(weights="imagenet")
try:

import timm
model = timm.create_model('tf_efficientnet_b7', features_only=True, from_tf=True, checkpoint_path='...')

This will tell timm to convert the TensorFlow checkpoint file to a PyTorch-compatible format.
Thanks, but I got the following error: Cell In[2], line 1 ----> 1 m = timm.create_model('tf_efficientnet_b7', features_only=True, from_tf=True, 2 checkpoint_path='/kaggle/input/tf-efficientnet/pytorch/tf-efficientnet-b7/1/tf_efficientnet_b7_ra-6c08e654.pth') File /opt/conda/lib/python3.10/site-packages/timm/models/factory.py:71, in create_model(model_name, pretrained, pretrained_cfg, checkpoint_path, scriptable, exportable, no_jit, **kwargs) 69 create_fn = model_entrypoint(model_name) 70 with set_layer_config(scriptable=scriptable, exportable=exportable, no_jit=no_jit): ---> 71 model = create_fn(pretrained=pretrained, pretrained_cfg=pretrained_cfg, **kwargs) 73 if checkpoint_path: 74 load_checkpoint(model, checkpoint_path) File /opt/conda/lib/python3.10/site-packages/timm/models/efficientnet.py:1816, in tf_efficientnet_b7(pretrained, **kwargs) 1814 kwargs['bn_eps'] = BN_EPS_TF_DEFAULT 1815 kwargs['pad_type'] = 'same' -> 1816 model = _gen_efficientnet( 1817 'tf_efficientnet_b7', channel_multiplier=2.0, depth_multiplier=3.1, pretrained=pretrained, **kwargs) 1818 return model File /opt/conda/lib/python3.10/site-packages/timm/models/efficientnet.py:880, in _gen_efficientnet(variant, channel_multiplier, depth_multiplier, channel_divisor, group_size, pretrained, **kwargs) 870 round_chs_fn = partial(round_channels, multiplier=channel_multiplier, divisor=channel_divisor) 871 model_kwargs = dict( 872 block_args=decode_arch_def(arch_def, depth_multiplier, group_size=group_size), 873 num_features=round_chs_fn(1280), (...) 
878 **kwargs, 879 ) --> 880 model = _create_effnet(variant, pretrained, **model_kwargs) 881 return model File /opt/conda/lib/python3.10/site-packages/timm/models/efficientnet.py:629, in _create_effnet(variant, pretrained, **kwargs) 627 kwargs_filter = ('num_classes', 'num_features', 'head_conv', 'global_pool') 628 model_cls = EfficientNetFeatures --> 629 model = build_model_with_cfg( 630 model_cls, variant, pretrained, 631 pretrained_strict=not features_only, 632 kwargs_filter=kwargs_filter, 633 **kwargs) 634 if features_only: 635 model.default_cfg = pretrained_cfg_for_features(model.default_cfg) File /opt/conda/lib/python3.10/site-packages/timm/models/helpers.py:537, in build_model_with_cfg(model_cls, variant, pretrained, pretrained_cfg, model_cfg, feature_cfg, pretrained_strict, pretrained_filter_fn, pretrained_custom_load, kwargs_filter, **kwargs) 534 feature_cfg['out_indices'] = kwargs.pop('out_indices') 536 # Build the model --> 537 model = model_cls(**kwargs) if model_cfg is None else model_cls(cfg=model_cfg, **kwargs) 538 model.pretrained_cfg = pretrained_cfg 539 model.default_cfg = model.pretrained_cfg # alias for backwards compat TypeError: EfficientNetFeatures.__init__() got an unexpected keyword argument 'from_tf'
maybe you are using an outdated version of the timm library that does not support loading models from TensorFlow checkpoints. You can try to update the library to the latest version using pip install -U timm
I got the following error when trying to load pth for offline notebook: 5 return timm.create_model('tf_efficientnet_b7', features_only = True, ----> 6 checkpoint_path = '/kaggle/input/tf-efficientnet/pytorch/tf-efficientnet-b7/1/tf_efficientnet_b7_ra-6c08e654.pth') /opt/conda/lib/python3.7/site-packages/timm/models/factory.py in create_model(model_name, pretrained, pretrained_cfg, checkpoint_path, scriptable, exportable, no_jit, **kwargs) 72 73 if checkpoint_path: ---> 74 load_checkpoint(model, checkpoint_path) 75 76 return model /opt/conda/lib/python3.7/site-packages/timm/models/helpers.py in load_checkpoint(model, checkpoint_path, use_ema, strict) 73 return 74 state_dict = load_state_dict(checkpoint_path, use_ema) ---> 75 incompatible_keys = model.load_state_dict(state_dict, strict=strict) 76 return incompatible_keys 77 /opt/conda/lib/python3.7/site-packages/torch/nn/modules/module.py in load_state_dict(self, state_dict, strict) 1666 if len(error_msgs) > 0: 1667 raise RuntimeError('Error(s) in loading state_dict for {}:\n\t{}'.format( -> 1668 self.__class__.__name__, "\n\t".join(error_msgs))) 1669 return _IncompatibleKeys(missing_keys, unexpected_keys) 1670 RuntimeError: Error(s) in loading state_dict for EfficientNetFeatures: Unexpected key(s) in state_dict: "conv_head.weight", "bn2.weight", "bn2.bias", "bn2.running_mean", "bn2.running_var", "bn2.num_batches_tracked", "classifier.weight", "classifier.bias".
I'm using it from the Kaggle's default offline environment, so I cannot do pip install I guess? print(timm.__version__) 0.6.13
Hi! For KerasLayer to work, you need to add a signature specification (in this case, the correct signature is serving_default) and an output_key specification (output_0 or output_1, with the first being the logits and the second being the embedding). Unfortunately, the model is not set up for fine-tuning in the standard way (e.g. including it in a tf.keras.Sequential model) as it cannot have trainable=False set explicitly on it due to being ported from JAX. You'll need to take the outputs embeddings and train a smaller model to make that work. We'll be putting out a sample notebook outlining that approach soon.
Hi, just to reiterate: does this mean that there is no way to extract embeddings from the internal model layers?

Regards,
Tzur
Hello! We train the model with binary cross entropy, so each class is independent - a no-bird example /should/ have low logits for all species. (but we'll be interested to hear if you observe otherwise.) We have mostly been using the model's embeddings more than the output probabilities, however: https://arxiv.org/abs/2307.06292
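The independence described above is just a per-class sigmoid, which a short numpy example makes concrete (the logit values below are made up for illustration, not real model output):

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

# Illustrative logits for a 3-species model: a "no-bird" segment,
# and a segment containing only the third species.
no_bird_logits = np.array([-8.0, -7.5, -9.0])
bird_logits = np.array([-8.0, -7.5, 2.0])

# With per-class sigmoids the probabilities are independent and need not
# sum to 1, so a silent segment can score low for every species at once --
# something a softmax over classes could not express.
print(sigmoid(no_bird_logits))  # all near 0
print(sigmoid(bird_logits))     # only the third class is high
```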
Thanks for the answer! That is nice to hear. I have bird sounds that I know are from a certain type of bird. However, the recording contains a lot of data, and most of it is the no-bird case. I want to fine-tune the model with this data. How should I do that? How did you filter out the no-bird case?
Hello, I would like to do some experimentation in PyTorch using these datasets. However, is there a way to reproduce the training/validation split used for the model shared in TF? Also, how are the 5-second chunks created? Is it a center crop on timestamps where a bird is known to be present, or is a chunk simply created every 5 seconds?
Hi, Shiro! We do all our evaluation on non-Xeno-Canto datasets (checking transfer to real-world soundscape problems) and so normally train on everything from XC.
Sure thing: We have some python tooling in the 'taxonomy' directory (with some documentation) here: https://github.com/google-research/perch/blob/main/chirp/taxonomy/README.md There's a Mapping from IOC (which is what Xeno Canto uses) to ebird2021 (ioc_12_2_to_ebird2021). It's stored in the taxonomy_database.json file, and you can use the namespace_db.load_db() tooling to load up the Mapping.
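Once the database is loaded, applying the mapping is essentially a name-to-code lookup. A minimal sketch of that idea, using a plain dict as a stand-in for the mapping object the namespace_db tooling would give you (the single entry below is illustrative):

```python
# Stand-in for the ioc_12_2_to_ebird2021 mapping stored in
# taxonomy_database.json; the real one is loaded via the perch
# namespace_db tooling rather than written out by hand.
ioc_to_ebird2021 = {
    'Cnemotriccus fuscatus': 'fusfly1',  # Fuscous Flycatcher
}

def to_ebird(ioc_name, mapping):
    # Some IOC names (e.g. undescribed 'sp.nov' forms) have no ebird2021
    # code, so a miss is returned as None rather than raising.
    return mapping.get(ioc_name)

print(to_ebird('Cnemotriccus fuscatus', ioc_to_ebird2021))  # fusfly1
print(to_ebird('Cnemotriccus sp.nov.', ioc_to_ebird2021))   # None
```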
Hi! For KerasLayer to work, you need to add a signature specification (in this case, the correct signature is serving_default) and an output_key specification (output_0 or output_1, with the first being the logits and the second being the embedding). Unfortunately, the model is not set up for fine-tuning in the standard way (e.g. including it in a tf.keras.Sequential model) as it cannot have trainable=False set explicitly on it due to being ported from JAX. You'll need to take the outputs embeddings and train a smaller model to make that work. We'll be putting out a sample notebook outlining that approach soon.
not there yet?
The model card states (emphasis mine):

"The input to the model is a batch of 5-second audio segments, sampled at 32kHz."

Can you provide an example of how to input a batch? This returns "ValueError: Python inputs incompatible with input_signature" on both versions of the model:

model.infer_tf(np.zeros((4, 5 * 32000), dtype=np.float32))
Hello! We ran into some difficulties with the jax2tf conversion of polymorphic batch dimensions. Looking into a solution, but no promises in the short term.
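In the meantime, one workaround is to feed the model one example at a time from Python and stack the results. A sketch, assuming the exported signature accepts a single (1, 160000) window; infer_one below is a stand-in for the model's infer call, and the dummy at the bottom exists only to make the sketch runnable:

```python
import numpy as np

def infer_batched(infer_one, batch):
    """Run a batch-size-1 inference function over each row of `batch`.

    `infer_one` is a stand-in for the model's single-example call
    (e.g. lambda x: model.infer_tf(x)) and must accept a (1, n) array.
    """
    return np.stack([np.asarray(infer_one(x[None, :]))[0] for x in batch])

# Dummy "model" so the sketch runs: sums each 5 s window of 160000 samples.
dummy = lambda x: x.sum(axis=-1, keepdims=True)
out = infer_batched(dummy, np.ones((4, 5 * 32000), dtype=np.float32))
print(out.shape)  # (4, 1)
```

This trades throughput for correctness; once polymorphic batch dimensions work in the export, the loop can be dropped.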
Hi, I was looking at the API and occasionally found species with similar names, but I am not sure whether they are the same species or not (I sometimes don't find them on the eBird website). For example, the scientific name Cnemotriccus sp.nov. vs Cnemotriccus fuscatus (English name: 'Fuscous Flycatcher sp.nov.' vs 'Fuscous Flycatcher'). Should they have the same ebird code? If yes, why are they separated? Fuscous Flycatcher has the ebird code fusfly1, but I haven't found one for the other.
Hi, Shiro! Our ebird codes ultimately come from the CSVs that you can download from ebird itself, here: https://ebird.org/news/2023-taxonomy-update The 'sp.nov' is likely what's referred to as an 'undescribed form', where someone thinks they found a new species and the Council of High Taxonomists has not yet rendered a decision… And sometimes things appear in one taxonomy but not the other. (Beware: the long tail of the taxonomy will drive you mad if you let it.)
Hello, I had a look at the dataset the model was trained on. There is one whose bird code names seem different, and I'm not sure how to map them properly to the usual names we have in the BirdCLEF competition: https://zenodo.org/record/4656848#.ZDEgavZBw2x Any idea where I can find the mapping?
Hi, Shiro - There's some inherent difficulties mapping between Xeno Canto (which until recently was using IOC taxonomy v10.2) and the ebird/Clements taxonomy. The taxonomies include some real differences, and update yearly. We have some library code for handling conversions here: https://github.com/google-research/chirp/tree/main/chirp/taxonomy Which code are you having trouble with?
Sorry for the long delay; I am looking into this again because I want to do a side project about it. I found some classes called LOWA, WOTH, etc., and I'm not sure where I can find the mapping for them. I did not find them in eBird_Taxonomy_v2021.csv or eBird_Taxonomy_v2022.csv.