mclemcrew committed
Commit 9554f73
1 Parent(s): b37e670

update readme with TODO values

Added missing params and updated some spelling/grammar mistakes.

Files changed (1)
  1. README.md +13 -13
README.md CHANGED
@@ -13,10 +13,10 @@ size_categories:
  This dataset was created to help advance the field of intelligent music production, specifically targeting music mixing in a digital audio workstation (DAW).
 
  - *What (other) tasks could the dataset be used for? Are there obvious tasks for which it should not be used?*
- This dataset could possibly be used to predict parameter values via semantic labels provided by the mix listening evaluations.
+ This dataset could also be used to predict parameter values from the semantic labels provided by the mix listening evaluations.
 
  - *Has the dataset been used for any tasks already? If so, where are the results so others can compare (e.g., links to published papers)?*
- Currently this dataset is still being curated and has yet to be used for any task. This will be updated once that has changed.
+ Currently, this dataset is still being curated and has yet to be used for any task. This will be updated once that changes.
 
  - *Who funded the creation of the dataset? If there is an associated grant, provide the grant number.*
  The National Science Foundation Graduate Research Fellowship Program (Award Abstract #1650114) helped to financially support the creation of this dataset by supporting the creator through their graduate program.
@@ -25,7 +25,7 @@ The National Science Foundation Graduate Research Fellowship Program (Award Abst
 
  **Dataset Composition**
  - *What are the instances? (that is, examples; e.g., documents, images, people, countries) Are there multiple types of instances? (e.g., movies, users, ratings; people, interactions between them; nodes, edges)*
- The instances themselves are annotated of individual mixes either from Logic Pro, Pro Tools, or Reaper depending on the artist who mixed them.
+ The instances are annotations of individual mixes from Logic Pro, Pro Tools, or Reaper, depending on the artist who mixed them.
 
  - *Are relationships between instances made explicit in the data (e.g., social network links, user/movie ratings, etc.)?*
  Each mix is unique, and there is no evident relationship between mixes.
@@ -37,7 +37,7 @@ There will be 114 mixes once this dataset is finalized.
  Each instance of a mix contains the following: Mix Name, Song Name, Artist Name, Genre, Tracks, Track Name, Track Type, Track Audio Path, Channel Mode, Parameters, Gain, Pan, etc.
 
  - *Is everything included or does the data rely on external resources? (e.g., websites, tweets, datasets) If external resources, a) are there guarantees that they will exist, and remain constant, over time; b) is there an official archival version. Are there licenses, fees or rights associated with any of the data?*
- The audio that is associated with each mix is an external resource as those audio files are original to their source. The sources are either from The Mixing Secrets, Weathervane, or TODO.
+ The audio associated with each mix is an external resource, as those audio files are original to their source. The original audio sources are The Mixing Secrets, Weathervane, and The Open Multitrack Testbed.
 
  - *Are there recommended data splits or evaluation measures? (e.g., training, development, testing; accuracy/AUC)*
  There are no recommended data splits. However, if no listening evaluation is available for a given mix, we recommend leaving that mix out if you plan on using the evaluation comments as the semantic representation of the mix. None of the mixes annotated from Mike Senior's The Mixing Secrets projects for Sound on Sound contain a listening evaluation.
@@ -55,7 +55,7 @@ The data was collected manually by annotating parameter values for each track in
  The author of this dataset collected the data and is a graduate student at the University of Utah.
 
  - *Over what time-frame was the data collected? Does the collection time-frame match the creation time-frame?*
- This dataset has been collected from September through November for 2023. The creation time-frame overlaps the collection time-frame as the main structure for the dataset was created, and mixes are added iteratively.
+ This dataset was collected from September through November of 2023. The creation time frame overlaps the collection time frame: the main structure of the dataset was created first, and mixes are added iteratively.
 
  - *How was the data associated with each instance acquired? Was the data directly observable (e.g., raw text, movie ratings), reported by subjects (e.g., survey responses), or indirectly inferred/derived from other data (e.g., part of speech tags; model-based guesses for age or language)? If the latter two, were they validated/verified and if so how?*
  The data were directly associated with each instance. The parameter values are visually represented in each session file for the mixes.
@@ -64,10 +64,10 @@ The data were directly associated with each instance. The parameter values are
  The dataset contains all possible instances that were given by The Mix Evaluation Dataset, excluding the copyrighted songs that were used in the listening evaluation.
 
  - *If the dataset is a sample, then what is the population? What was the sampling strategy (e.g., deterministic, probabilistic with specific sampling probabilities)? Is the sample representative of the larger set (e.g., geographic coverage)? If not, why not (e.g., to cover a more diverse range of instances)? How does this affect possible uses?*
- This dataset does not represent a sample of a larger population and thus a sample size is not appropriate in this case.
+ This dataset does not represent a sample of a larger population, and thus a sampling strategy is not applicable in this case.
 
  - *Is there information missing from the dataset and why? (this does not include intentionally dropped instances; it might include, e.g., redacted text, withheld documents) Is this data missing because it was unavailable?*
- Not all of the parameter values for every plugin used was documented. Occasionally a mix would include a saturator or a multiband compressor. Due to the low occurrence of these plugins, these were omitted for the annotating process.
+ Not all of the parameter values for every plugin used were documented. Occasionally a mix would include a saturator or a multiband compressor; due to the low occurrence of these plugins, they were omitted from the annotation process.
 
  - *Are there any known errors, sources of noise, or redundancies in the data?*
  To the author's knowledge, there are no errors or sources of noise within this dataset.
@@ -82,7 +82,7 @@ The data preprocessing happened during the data collection stage for this datase
  The raw data is still saved in the project files but was not annotated and, therefore, is not contained in this dataset. For the raw files of each mix, the reader should explore The Mix Evaluation Dataset for these values.
 
  - *Is the preprocessing software available?*
- The tool that was used to help the author annotate some of the parameter values is available for download here: [TODO]
+ The tool that was used to help the author annotate some of the parameter values is available for download [here](https://github.com/mclemcrew/MixologyDB).
 
  - *Does this dataset collection/processing procedure achieve the motivation for creating the dataset stated in the first section of this datasheet?*
  The authors of this dataset intended to create an ethically sourced repository for AI music researchers to use for music mixing. We believe that by using The Mix Evaluation Dataset along with publicly available music mixing projects, we have achieved our goal. Although this dataset is considerably smaller than what is required for most model architectures utilized in generative AI applications, we hope it is a positive addition to the field.
@@ -109,16 +109,16 @@ There are no fees or access/export restrictions for this dataset.
  HuggingFace is currently hosting the dataset, and Michael Clemens (email: michael.clemens at utah.edu) is maintaining it.
 
  - *Will the dataset be updated? How often and by whom? How will updates/revisions be documented and communicated (e.g., mailing list, GitHub)? Is there an erratum?*
- The release of this dataset is set to be ***December 5th, 2023***. Updates and reivisions will be documented through the repository through HuggingFace. There is currently no erratum, but should that be the case, this will be documented here as they come about.
+ The release of this dataset is set for ***December 5th, 2023***. Updates and revisions will be documented through the repository on HuggingFace. There is currently no erratum, but should one be needed, it will be documented here.
 
  - *If the dataset becomes obsolete how will this be communicated?*
  Should the dataset no longer be valid, this will be communicated through the ReadMe right here on HF.
 
  - *Is there a repository to link to any/all papers/systems that use this dataset?*
- There is no repo or link to any paper/systems that use the dataset. Should this dataset be used in the future for papers or system design, there will be a link to these works on this ReadMe or a website will be created and linked here for the collection of works.
+ There is currently no repository linking to papers or systems that use the dataset. Should this dataset be used in the future for papers or system design, these works will be linked on this ReadMe, or a website will be created and linked here to collect them.
 
  - *If others want to extend/augment/build on this dataset, is there a mechanism for them to do so? If so, is there a process for tracking/assessing the quality of those contributions. What is the process for communicating/distributing these contributions to users?*
- This dataset itself is an extension of The Mix Evaluation Dataset by Brecht De Man et al. and users are free to extend/augment/build on this dataset. There is no trackable way currently of assessing these contributions.
+ This dataset is an extension of The Mix Evaluation Dataset by Brecht De Man et al., and users are free to extend/augment/build on it. There is currently no mechanism for tracking or assessing these contributions.
 
  - *Any other comments?*
 
@@ -130,13 +130,13 @@ As this was a derivative of another work that performed the main data collection
  N/A
 
  - *If it relates to people, were there any ethical review applications/reviews/approvals? (e.g. Institutional Review Board applications)*
- As this is an extention of the main dataset by Brecht De Man et al. and the data collection had already been conducted, an IRB was not included in this creation of this dataset. The data themselves are not related to the music producers but instead remain as an artifact of their work. Due to the nature of these data, an IRB was not needed.
+ As this is an extension of the main dataset by Brecht De Man et al., and the data collection had already been conducted, an IRB application was not included in the creation of this dataset. The data themselves are not related to the music producers but instead remain an artifact of their work. Due to the nature of these data, an IRB was not needed.
 
  - *If it relates to people, were they told what the dataset would be used for and did they consent? What community norms exist for data collected from human communications? If consent was obtained, how? Were the people provided with any mechanism to revoke their consent in the future or for certain uses?*
  N/A
 
  - *If it relates to people, could this dataset expose people to harm or legal action? (e.g., financial social or otherwise) What was done to mitigate or reduce the potential for harm?*
- The main initiative of this work was to create a dataset that was ethically sourced for parameter recommendations in the music mixing process. With this, all of the data found here has been gathered from publically avaiable data from artists. Therefore no copyright or fair use infringement exists.
+ The main initiative of this work was to create an ethically sourced dataset for parameter recommendations in the music-mixing process. All of the data found here has been gathered from publicly available data from artists; therefore, no copyright or fair-use infringement exists.
 
  - *If it relates to people, does it unfairly advantage or disadvantage a particular social group? In what ways? How was this mitigated? If it relates to people, were they provided with privacy guarantees? If so, what guarantees and how are these ensured?*
  N/A
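For illustration, the per-mix instance structure the README lists (Mix Name, Song Name, Artist Name, Genre, plus per-track fields such as Track Name, Track Type, Track Audio Path, Channel Mode, Parameters, Gain, and Pan) could be sketched as below, together with the recommended filter that drops mixes lacking a listening evaluation. The field names, types, and nesting here are assumptions for illustration only, not the dataset's actual serialization:

```python
from dataclasses import dataclass, field
from typing import Optional

# Hypothetical sketch of one dataset instance, based on the fields the
# datasheet enumerates. The real dataset's schema may differ.

@dataclass
class Track:
    name: str               # Track Name
    track_type: str         # Track Type, e.g. "audio"
    audio_path: str         # Track Audio Path (points to an external resource)
    channel_mode: str       # Channel Mode, e.g. "mono" or "stereo"
    gain_db: float          # Gain
    pan: float              # Pan, e.g. -1.0 (left) to 1.0 (right)
    parameters: dict = field(default_factory=dict)  # plugin parameter values

@dataclass
class Mix:
    mix_name: str
    song_name: str
    artist_name: str
    genre: str
    tracks: list
    # Absent for the Mixing Secrets mixes, per the datasheet.
    listening_evaluation: Optional[str] = None

def with_evaluations(mixes):
    """Keep only mixes that have a listening evaluation, as the datasheet
    recommends when using evaluation comments as semantic labels."""
    return [m for m in mixes if m.listening_evaluation is not None]

example = Mix(
    mix_name="Mix A",
    song_name="Song Title",
    artist_name="Artist",
    genre="Rock",
    tracks=[Track("Lead Vocal", "audio", "audio/lead_vocal.wav",
                  "mono", gain_db=-3.0, pan=0.0)],
    listening_evaluation="Vocals are clear; bass slightly heavy.",
)
no_eval = Mix("Mix B", "Song", "Artist", "Pop", tracks=[])
print(len(with_evaluations([example, no_eval])))  # prints 1
```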