์งˆ์˜,๊ฒ€์ƒ‰ ๊ฒฐ๊ณผ,flag,username,timestamp
"๋ชธ์— ๋ฐ˜์ ์ด ์žˆ๋‹ค๋ฉด ์–ด๋–ค ์งˆ๋ณ‘์ผ ๊ฐ€๋Šฅ์„ฑ์ด ์žˆ์„๊นŒ?
","๋ฌธ์„œ 1
์ •ํ™•ํ•œ ์›์ธ์€ ๋ฐํ˜€์ ธ ์žˆ์ง€ ์•Š์œผ๋‚˜ ์œ ์ „์ , ํ™˜๊ฒฝ์  ์š”์ธ์ด ๋ณตํ•ฉ๋˜์–ด ์ž‘์šฉํ•˜๋Š” ๊ฒƒ์œผ๋กœ ์ง์ž‘๋œ๋‹ค. ์ด ์งˆ๋ณ‘์„ ์•“๋Š” ๊ฐ€์กฑ ๊ตฌ์„ฑ์›์ด ์žˆ๋‹ค๋ฉด ๋ณธ์ธ์—๊ฒŒ๋„ ์˜ํ–ฅ์„ ๋ฐ›์„ ๊ฐ€๋Šฅ์„ฑ์ด ์žˆ๋‹ค. ์˜ํ–ฅ์„ ๋ฐ›์€ ์ผ๋ž€์„ฑ ์Œ๋‘ฅ์ด๊ฐ€ ์žˆ๋‹ค๋ฉด 30%์˜ ํ™•๋ฅ ๋กœ ๋‹ค๋ฅธ ์Œ๋‘ฅ์ด๋„ ์ด ์งˆ๋ณ‘์„ ๊ฐ€์ง€๊ณ  ์žˆ์„ ๊ฐ€๋Šฅ์„ฑ์ด ์žˆ๋‹ค. ์ด ์งˆ๋ณ‘์€ ์ŠคํŠธ๋ ˆ์Šค, ๊ฐ์—ผ, ์ถœ์ƒ ์‹œ์— ๋ฐœ๋ณ‘ํ•  ์ˆ˜ ์žˆ๋‹ค. ์ œ1ํ˜• ๋‹น๋‡จ๋ณ‘, ๋ฅ˜๋งˆํ‹ฐ์Šค ๊ด€์ ˆ์—ผ๊ณผ ๊ฐ™์€ ์ž๊ฐ€๋ฉด์—ญ ์งˆํ™˜์„ ๊ฐ€์ง„ ํ™˜์ž๋“ค์ด ์˜ํ–ฅ์„ ๋ฐ›์„ ๊ฐ€๋Šฅ์„ฑ์ด ์žˆ๋‹ค. ํก์—ฐ์€ ์ด ์งˆ๋ณ‘์˜ ์œ„ํ—˜์„ฑ์„ ๋†’์ด๋ฉฐ ์•ˆ๊ตฌ ๋ฌธ์ œ๋ฅผ ๋” ์•…ํ™”์‹œํ‚ฌ ์ˆ˜ ์žˆ๋‹ค. ์ด ์งˆ๋ณ‘์€ TSI๋ผ๋Š” ํ•ญ์ฒด์—์„œ ๋น„๋กฏํ•˜๋ฉฐ ์ด๋Š” ๊ฐ‘์ƒ์ƒ˜ ์ž๊ทน ํ˜ธ๋ฅด๋ชฌ๊ณผ ์˜ํ–ฅ๋„๊ฐ€ ๋น„์Šทํ•˜๋‹ค. ์ด๋Ÿฌํ•œ ํ•ญ์ฒด๋“ค์€ ๊ฐ‘์ƒ์ƒ˜์ด ๊ฐ‘์ƒ์ƒ˜ ํ˜ธ๋ฅด๋ชฌ์„ ๊ณผ๋„ํ•˜๊ฒŒ ์ƒ์‚ฐํ•˜๋„๋ก ์œ ๋„ํ•œ๋‹ค. ์งˆ๋ณ‘ ํ™•์ธ์„ ์œ„ํ•œ ํ˜ˆ์•ก ๊ฒ€์‚ฌ, ๋ฐฉ์‚ฌ์„ฑ ์š”์˜ค๋“œ ์„ญ์ทจ๋ฅผ ํ†ตํ•œ ์ฆ์ƒ์— ๊ธฐ๋ฐ˜ํ•˜์—ฌ ์ง„๋‹จํ•œ๋‹ค.
๋ฌธ์„œ 2
1603๋…„ 37์„ธ์˜ ๋‚˜์ด๋กœ ์‚ฌ๋งํ•˜์˜€๊ณ , ์•„๋“ค ์•„ํ๋ฉ”ํŠธ 1์„ธ๊ฐ€ ์™•์œ„๋ฅผ ์ด์–ด๋ฐ›์•˜๋‹ค. ์‚ฌ๋ง ์›์ธ์€ ์ •ํ™•ํ•˜๊ฒŒ ์•Œ๋ ค์ ธ ์žˆ์ง€ ์•Š์œผ๋ฉฐ, ์ž์—ฐ์‚ฌ๋‚˜ ์งˆ๋ณ‘์ผ ๊ฐ€๋Šฅ์„ฑ์ด ์žˆ๋‹ค.
๋ฌธ์„œ 3
์ด์ƒ์˜ ์—ฐ๊ตฌ ๊ฒฐ๊ณผ๋“ค์€ ์˜ค์ง ๊ฐ€์Šค ํ–‰์„ฑ ๋ฐ ๊ฐˆ์ƒ‰ ์™œ์„ฑ์ด ์กด์žฌํ•˜์ง€ ์•Š์Œ์„ ๋ฐํ˜€์คฌ์„ ๋ฟ ์ง€๊ตฌ๋‚˜ ๊ธˆ์„ฑ์ฒ˜๋Ÿผ ์ž‘์€ ํ–‰์„ฑ์ด ์กด์žฌํ•  ๊ฐ€๋Šฅ์„ฑ์€ ์•„์ง ๋‚จ์•„ ์žˆ๋‹ค. ๋งŒ์•ฝ ๋œจ๊ฑฐ์šด ๋ชฉ์„ฑ์ด ํ•ญ์„ฑ ๊ฐ€๊นŒ์ด ์žˆ์—ˆ๋‹ค๋ฉด ์ด๋“ค์€ ํ•ญ์„ฑ ๊ทผ์ฒ˜๋ฅผ ๋Œ๋˜ ์ง€๊ตฌํ˜• ํ–‰์„ฑ์˜ ๊ถค๋„๋ฅผ ๋ง๊ฐ€๋œจ๋ ค ์ƒ๋ช…์ฒด ๋ฐœ์ƒ ๊ฐ€๋Šฅ์„ฑ์„ ๋‚ฎ์ท„์„ ๊ฒƒ์ด๋‹ค. ๋”ฐ๋ผ์„œ ๊ฐ€์Šค ํ–‰์„ฑ์ด ์—†๋‹ค๋Š” ๊ฒƒ์€ ์ง€๊ตฌ ๋น„์Šทํ•œ ํ–‰์„ฑ์ด ์กด์žฌํ•  ๊ฐ€๋Šฅ์„ฑ์„ ๋†’์—ฌ ์ฃผ๋Š” ์‚ฌ์‹ค์ด ๋œ๋‹ค. ํ†ต์ƒ์ ์œผ๋กœ ๋ฐํ˜€์ง„ ์—ฐ๊ตฌ์— ๋”ฐ๋ฅด๋ฉด ์ค‘์›์†Œ ํ•จ๋Ÿ‰์ด ๋†’์€ ๋ณ„ ์ฃผ์œ„์—๋Š” ํ–‰์„ฑ์ด ์žˆ์„ ํ™•๋ฅ ์ด ๋†’๊ณ  ๊ทธ๋ ‡์ง€ ์•Š์€ ๋ณ„ ์ฃผ์œ„์—๋Š” ํ–‰์„ฑ์ด ์žˆ์„ ํ™•๋ฅ ์ด ์ ์€ ๊ฒƒ์œผ๋กœ ๋ฐํ˜€์กŒ๋‹ค. ๋‘๊บผ์šด ๋จผ์ง€ ์›๋ฐ˜์ด ์žˆ๋‹ค๋Š” ์‚ฌ์‹ค์€ ํ•ญ์„ฑ ๊ฐ€๊นŒ์ด์— ์•”์„ ํ–‰์„ฑ์ด ์กด์žฌํ•  ๊ฐ€๋Šฅ์„ฑ์„ ๋†’์ด๋Š” ๊ฒƒ์€ ์‚ฌ์‹ค์ด๋‹ค. ๊ทธ๋Ÿฌ๋‚˜ ์ด๋Š” ๋™์‹œ์— ๊ทธ ์•”์„ ํ–‰์„ฑ์ด ํญ๊ฒฉ์„ ๋ฐ›์„ ๊ฐ€๋Šฅ์„ฑ์ด ๋†’์Œ์„ ์•Œ๋ ค์ฃผ๋Š” ์‚ฌ์‹ค์ด๊ธฐ๋„ ํ•˜๋‹ค. ๋งŒ์•ฝ ํ–‰์„ฑ์ด ๋ฐœ๊ฒฌ๋œ๋‹ค๋ฉด ์ดํ›„์˜ ์—ฐ๊ตฌ ๋ฐฉํ–ฅ์€ ์ด ํ–‰์„ฑ์— ์ƒ๋ช…์ฒด๊ฐ€ ์‚ด ์ˆ˜ ์žˆ๋Š” ๋ฌผ๊ณผ ๋Œ€๊ธฐ๊ฐ€ ์กด์žฌํ•˜๋Š”์ง€๋ฅผ ์ถฉ๋ถ„ํ•œ ํ•ด์ƒ๋„์˜ ๋ง์›๊ฒฝ์„ ํ†ตํ•ด ์•Œ์•„๋‚ด๋Š” ๊ฒƒ์ด ๋œ๋‹ค. ์ง€๊ตฌ์˜ ์‚ฐ์†Œ๊ฐ€ ์ƒ๋ช…์ฒด ์กด์žฌ๋ฅผ ๋‚˜ํƒ€๋‚ด๋Š” ์ฒ™๋„๊ฐ€ ๋˜๋Š” ๊ฒƒ์ฒ˜๋Ÿผ ๋ฌด๊ธฐ์„ฑ์˜ ๋Œ€๊ธฐ ์กฐ์„ฑ์€ ์›์‹œ ์ƒ๋ช…์ฒด๊ฐ€ ์กด์žฌํ•จ์„ ๋‚˜ํƒ€๋‚ด๋Š” ์ง•ํ‘œ๊ฐ€ ๋  ์ˆ˜ ์žˆ๋‹ค.
๋ฌธ์„œ 4
์น˜๋งค๋‚˜ ๋ฐฑํ˜ˆ๋ณ‘, ๋‹น๋‡จ, ํŒŒํ‚จ์Šจ๋ณ‘๊ณผ ๊ฐ™์€ ๋‚œ์น˜๋ณ‘๋“ค ์ค‘์—๋Š” ์„ธํฌ์˜ ๋ณ€์ด๋‚˜ ์‚ฌ๋ฉธ๋กœ ์ธํ•œ ์งˆ๋ณ‘์ด ๋Œ€๋‹ค์ˆ˜์ด๋‹ค. ์ด๋Ÿฌํ•œ ํ‡ดํ–‰์„ฑ ์งˆ๋ณ‘์˜ ๊ฒฝ์šฐ ์„ธํฌ ์น˜๋ฃŒ๋ฒ•์„ ์ด์šฉํ•ด์—ฌ ์น˜๋ฃŒํ•˜๋Š” ๊ฒฝ์šฐ๊ฐ€ ๋งŽ๋‹ค. ํŠน์ด์ ์ฃผ์˜์ž๋“ค์— ๋”ฐ๋ฅด๋ฉด ์ค„๊ธฐ์„ธํฌ ์—ฐ๊ตฌ์™€ ๊ฐ™์€ ์„ธํฌ ์—ฐ๊ตฌ๋Š” ์ƒ๋ช… ๊ณตํ•™ ์—ฐ๊ตฌ์˜ ์ผ๋ถ€๋ถ„์ด๋ฉฐ ์œ ์ „์ž DNA ์ง€๋„๋ฅผ ์™„๋ฒฝํ•˜๊ฒŒ ๊ตฌ์กฐํ™”ํ•  ์ˆ˜ ์žˆ๋‹ค๋ฉด ์„ธํฌ๋ถ„ํ™” ์น˜๋ฃŒ ํ˜น์€ ์„ธํฌ๋ณต์ œ ์น˜๋ฃŒ๋ฅผ ํ†ตํ•ด ํ™˜์ž ์ž์‹ ์˜ DNA๋ฅผ ์ง€๋‹ˆ๊ณ  ํ…”๋กœ๋ฏธ์–ด๊ฐ€ ์—ฐ์žฅ๋œ ์„ธํฌ๋ฅผ ๊ณต๊ธ‰ํ•  ์ˆ˜ ์žˆ์„ ๊ฒƒ์ด๋ผ๊ณ  ๋ณธ๋‹ค. ์˜ˆ์ปจ๋ฐ ํ˜„์žฌ ๋‹น๋‡จ๋ณ‘ ์น˜๋ฃŒ์— ์“ฐ์ด๋Š” ๊ฑฐ๋ถ€๋ฐ˜์‘ ์ œ์–ด์ œ๊ฐ€ ์œ„ํ—˜ํ•œ ๋ถ€์ž‘์šฉ์„ ์ผ์œผํ‚ฌ ๊ฐ€๋Šฅ์„ฑ์ด ์žˆ๋Š” ๋ฐ˜๋ฉด ์ด๋Ÿฌํ•œ ์„ธํฌ ์น˜๋ฃŒ๋Š” ๋ถ€์ž‘์šฉ ๊ฐ€๋Šฅ์„ฑ์„ ๊ธ‰๊ฒฉํžˆ ๋‚ฎ์ถœ ์ˆ˜ ์žˆ๋‹ค. ์ด๋Ÿฌํ•œ ์„ธํฌ ์น˜๋ฃŒ๋Š” ๋‹จ์ˆœํžˆ ๋‚œ์น˜๋ณ‘์˜ ์น˜๋ฃŒ์—๋งŒ ๊ทธ์น˜๋Š” ๊ฒƒ์ด ์•„๋‹ˆ๋ผ, ๊ต์ฐจ๋ถ„ํ™” ๊ธฐ์ˆ ์„ ์ด์šฉํ•œ ์ƒˆ๋กœ์šด ์‹ ์ฒด ๊ธฐ๊ด€์œผ๋กœ์˜ ๊ต์ฒด๋ฅผ ๊ฐ€๋Šฅํ•˜๊ฒŒ ํ•œ๋‹ค.
๋ฌธ์„œ 5
์ปดํ“จํ„ฐ์— ์˜ํ•œ ์‹œ๋ฎฌ๋ ˆ์ด์…˜์—๋Š”, ๋ณด์ด๋“œ๋กœ ๋ถˆ๋ฆฌ๋Š” ํ‹ˆ์ƒˆ๋‚˜ ๋ฒ„๊ทธ๊ฐ€ ์žˆ๊ณ , ๋‚ด๋ถ€๋กœ๋ถ€ํ„ฐ๋„ ์•„๋Š” ๊ฒฝ์šฐ๊ฐ€ ์žˆ์„ ์ง€๋„ ๋ชจ๋ฅด๋‹ค. ๊ทธ๋Ÿฌํ•œ ๊ฒƒ์„ ์ฐพ์•„๋‚ด ๊ฒ€์ฆํ•  ์ˆ˜ ์žˆ๋‹ค๋ฉด, ๊ฑฐ๊ธฐ์— ๋”ฐ๋ผ ๋ชจ์˜ํ˜„์‹ค์˜ ๋‚ด๋ถ€์— ์žˆ๋Š” ๊ฒƒ์„ ์ฆ๋ช…ํ•  ์ˆ˜ ์žˆ์„ ๊ฐ€๋Šฅ์„ฑ์ด ์žˆ๋‹ค. ๊ทธ๋Ÿฌ๋‚˜, ๋ฌผ๋ฆฌ ๋ฒ•์น™์— ๋ฐ˜ํ•˜๋Š” ์ผ์€, ๊ทธ ๋ฐ–์—๋„ ์„ค๋ช…ํ•  ์ˆ˜ ์žˆ๋Š” ๊ฐ€์„ค์„ ์ƒ๊ฐํ•  ์ˆ˜ ์žˆ๋‹ค(์‹  ๋“ฑ). ์˜ํ™” ใ€Š๋งคํŠธ๋ฆญ์Šคใ€‹๋กœ ๊ทธ๋ ค์ง„ ๊ฒƒ์ฒ˜๋Ÿผ, ๊ธฐ์‹œ๊ฐ ๋“ฑ์˜ ์ผ์ƒ์ ์ธ ๊ธฐ๋ฌ˜ํ•œ ์ฒดํ—˜๋„ ์–ด๋– ํ•œ ๋ฒ„๊ทธ๋กœ์„œ ์„ค๋ช…ํ•  ์ˆ˜ ์žˆ์„ ๊ฐ€๋Šฅ์„ฑ์ด ์žˆ๋‹ค.
๋ฌธ์„œ 6
์˜ˆ๋ฅผ ๋“ค๋ฉด ๋Œ€๊ธฐ๊ฐ€ ์—†๋Š” ์–ด๋–ค ํ–‰์„ฑ ๊นŠ์€ ๋‹จ์ธต ์† ๊ทธ๋Š˜์ง„ ๊ณณ์ด๋‚˜ ํ™”์‚ฐ ๋™๊ตด ์†์— ์ƒํƒœ๊ณ„๊ฐ€ ํ˜•์„ฑ๋˜์–ด ์žˆ์„ ๊ฐ€๋Šฅ์„ฑ์ด ์žˆ๋‹ค. ์ €๋ช…ํ–ˆ๋˜ ์ฒœ๋ฌธํ•™์ž ์นผ ์„ธ์ด๊ฑด์€ ์ด ๊ฐœ๋…์„ ํƒœ์–‘๊ณ„์— ์ ์šฉํ•˜์—ฌ, 1976๋…„ ๋…ผ๋ฌธ์„ ํ†ตํ•ด ๋ชฉ์„ฑ์˜ ๋Œ€๊ธฐ ์ƒ์ธต๋ถ€์— ์œ ๊ธฐ์ฒด๊ฐ€ ์‚ด๊ณ  ์žˆ์„ ๊ฐ€๋Šฅ์„ฑ์„ ํ”ผ๋ ฅํ–ˆ๋‹ค. ๊ทธ๋Ÿฌ๋‚˜ ๋ชฉ์„ฑ์—๋Š” ๋”ฑ๋”ฑํ•œ ํ‘œ๋ฉด์ด ์—†๊ธฐ ๋•Œ๋ฌธ์— ์ƒ๋ช…์ฒด๊ฐ€ ์กด์žฌํ•  ๊ฐ€๋Šฅ์„ฑ์€ ๊ฑฐ์˜ ์—†๋‹ค.
๋ฌธ์„œ 7
๋„๋ฆฌ ์•Œ๋ ค์ ธ ์žˆ๋‹ค๋Š” ์‚ฌ์‹ค์ด ๋ฐ˜๋“œ์‹œ ์ฐธ์ž„์„ ๋ณด์ฆํ•˜๋Š” ๊ฒƒ์€ ์•„๋‹ˆ๋ฏ€๋กœ ์ด๋Ÿฐ ์ฃผ์žฅ์€ ๋…ผ๋ฆฌ์ ์œผ๋กœ ์˜ค๋ฅ˜์ด๋‹ค. ๊ฐœ์ธ์˜ ์‹ ๋…์ด ์ž˜๋ชป๋˜์–ด ์žˆ์„ ๊ฐ€๋Šฅ์„ฑ์ด ์žˆ๋‹ค๋ฉด ๋‹ค์ˆ˜์˜ ์ธ๊ฐ„์˜ ์‹ ๋…๋„ ์ž˜๋ชป๋˜์–ด ์žˆ์„ ๊ฐ€๋Šฅ์„ฑ์ด ์žˆ๋‹ค. ์˜ˆ๋ฅผ ๋“ค๋ฉด, 75%์˜ ์‚ฌ๋žŒ์ด A๋ผ๊ณ  ๋Œ€๋‹ตํ–ˆ๋‹ค๊ณ  ํ•ด๋„ 25%์˜ ํ™•๋ฅ ๋กœ A๊ฐ€ ์•„๋‹ ๊ฐ€๋Šฅ์„ฑ๋„ ์žˆ๋‹ค. ์ด ํ™•๋ฅ ์ด ์–ด๋–ป๊ฒŒ ๋˜๋“  ๋‹ค์ˆ˜๊ฐ€ ์˜ณ๋‹ค๋Š” ๊ฒƒ์€ ๋…ผ๋ฆฌ์ ์ด๋ผ๊ณ  ํ•  ์ˆ˜ ์—†๋‹ค. ๋งŒ์•ฝ ๋งŒ์žฅ์ผ์น˜๋ผ๊ณ  ํ•ด๋„, ํ‘œ๋ณธ์˜ ์ˆ˜๊ฐ€ ๋ถˆ์ถฉ๋ถ„ํ• ์ง€๋„ ๋ชจ๋ฅด๊ณ , ํ˜น์€ ๊ทธ ์‚ฌ๋žŒ๋“ค์ด ๋ชจ๋ฅด๋Š” ์‚ฌ์‹ค์ด ์กด์žฌํ•˜๊ณ  ์žˆ์–ด์„œ ๊ทธ ์‚ฌ์‹ค์„ ์•Œ๋ฉด ๊ฒฐ๊ณผ๊ฐ€ ๋ฐ”๋€”์ง€๋„ ๋ชจ๋ฅธ๋‹ค.
๋ฌธ์„œ 8
""(a and b)""์™€ ๊ฐ™์€ ๋…ผ๋ฆฌ์‹์„ ๊ณ„์‚ฐํ•œ๋‹ค๊ณ  ํ•˜๋ฉด ""a""ํ•ญ์ด ๊ฑฐ์ง“์ธ ๊ฒฝ์šฐ์—, ""b""ํ•ญ์„ ๊ณ„์‚ฐํ•˜์ง€ ์•Š์•„๋„ ์ „์ฒด ์‹์˜ ๋‹ต์„ ์•Œ ์ˆ˜ ์žˆ๋‹ค. ""(a or b)""์—์„œ ""a""ํ•ญ์ด ์ฐธ์ธ ๊ฒฝ์šฐ์—๋„ ๋งˆ์ฐฌ๊ฐ€์ง€์ด๋‹ค. ์—ฌ๊ธฐ์„œ ํ•ญ์ด ๋ณต์žกํ•œ ์‹์ด๋ฉด ์ด์ ์ด ๋งŽ๊ณ , ์‹์—์„œ ๊ฒฐ๊ณผ๊ฐ€ ์ฐธ์ด๋‚˜ ๊ฑฐ์ง“์ผ ๊ฐ€๋Šฅ์„ฑ๊ณผ ๊ณ„์‚ฐ์˜ ๋น„์šฉ์— ๋”ฐ๋ผ ์–ด๋–ค ํ•ญ์ด ๋จผ์ € ๊ณ„์‚ฐ๋˜์–ด์•ผ ์ข‹์€์ง€ ์•Œ ์ˆ˜ ์žˆ๋‹ค. ๋”ฐ๋ผ์„œ ""(a or b or c)""์™€ ๊ฐ™์€ ์‹์—์„œ ""a""ํ•ญ์ด ์ฐธ๊ฐ’์„ ๊ฐ€์งˆ ๊ฐ€๋Šฅ์„ฑ์ด ๋งŽ๋‹ค๋ฉด, ์ „์ฒด ์‹์„ ์‰ฝ๊ฒŒ ๊ณ„์‚ฐํ•  ์ˆ˜ ์žˆ๋‹ค. ์ด๋Ÿฐ ๊ฐ€๋Šฅ์„ฑ์„ ๋ณด์žฅํ•˜๊ธฐ ์œ„ํ•ด, ์ปดํŒŒ์ผ๋Ÿฌ๋Š” ๋” ๊ณ„์‚ฐํ•ด์•ผ ํ•  ๊ฒƒ์ธ์ง€, ๋‹ค๋ฅธ ํ•ญ์„ ์ง€๋ฆ„๊ธธ ๊ณ„์‚ฐ ํ•ด์•ผ ํ•  ๊ฒƒ์ธ์ง€๋ฅผ ๊ฒ€์‚ฌํ•˜๊ธฐ๋„ ํ•œ๋‹ค. ์ด๋Ÿฐ ๊ฒ€์‚ฌ๋Š” ๊ณ„์‚ฐ์„ ์ค„์ด๋Š” ๊ฒƒ์„ ์‹คํŒจํ•  ๊ฒฝ์šฐ๋‚˜ ๊ผญ ํ•„์š”ํ•œ ๊ฒฝ์šฐ ๋ฌด์กฐ๊ฑด ์ „์ฒด ์‹์„ ๊ณ„์‚ฐํ•ด์•ผ ํ•  ๋•Œ ์‹œ๊ฐ„์ด ๋” ๋งŽ์ด ๊ฑธ๋ฆฌ๊ฒŒ ๋œ๋‹ค.
๋ฌธ์„œ 9
์ƒ๋ฌผํ•™์ ์œผ๋กœ๋Š” ์ธ๊ฐ„์˜ ๋‡Œ์— ํ•„์š”ํ•œ ์œ ์ „ ์ •๋ณด๋ฅผ ๊ฐ€์ง€๋Š” ์ธ๊ณต์ ์ธ ๊ฒŒ๋†ˆ์„ ์ ๋‹นํ•œ ์ˆ™์ฃผ์˜ ์„ธํฌ์— ์งœ๋„ฃ๋Š” ๊ฒƒ์œผ๋กœ ์ธ๊ณต์ ์œผ๋กœ ์ƒ๋ช…์„ ๋งŒ๋“œ๋Š” ๊ฒƒ๋„ ๊ฐ€๋Šฅํ• ์ง€๋„ ๋ชจ๋ฅธ๋‹ค๊ณ  ์ƒ๊ฐ๋˜๋ฉฐ, ๊ทธ๋Ÿฌํ•œ ์ธ๊ณต์ƒ๋ช…์ฒด๋Š” ์˜์‹์„ ๊ฐ€์งˆ ๊ฐ€๋Šฅ์„ฑ์ด ๋†’๋‹ค. ๊ทธ๋ ‡์ง€๋งŒ ๊ทธ ์ƒ๋ช…์ฒด ์•ˆ์˜ ์–ด๋–ค ์†์„ฑ์ด ์˜์‹์„ ๋‚ณ๋Š” ๊ฒƒ์ผ๊นŒ? ๋น„์Šทํ•œ ๊ฒƒ์„ ๋น„์ƒ๋ฌผํ•™์ ์ธ ๋ถ€ํ’ˆ์—์„œ ๋งŒ๋“ค ์ˆ˜ ์—†๋Š” ๊ฒƒ์ธ์ง€? ์ปดํ“จํ„ฐ๋ฅผ ์„ค๊ณ„ํ•˜๊ธฐ ์œ„ํ•œ ๊ธฐ์ˆ ๋กœ ๊ทธ๋Ÿฌํ•œ ์˜์‹์ฒด๋ฅผ ๋‚ณ์„ ์ˆ˜ ์—†์„๊นŒ? ๊ทธ๋Ÿฌํ•œ ํ–‰์œ„๋Š” ์œค๋ฆฌ์ ์œผ๋กœ ๋ฌธ์ œ ์—†์„๊นŒ?์™€ ๊ฐ™์€ ์—ฌ๋Ÿฌ ๋ฌธ์ œ๋ฅผ ์•ˆ๊ณ  ์žˆ๋‹ค.
๋ฌธ์„œ 10
3๋ถ€ ๋ฆฌ๊ทธ",,,2023-06-16 00:57:35.174375
"๋ชจ๋ธ ๊ฒฝ๋Ÿ‰ํ™” ๋ฐ ์ตœ์ ํ™”์™€ ๊ด€๋ จํ•˜์—ฌ Netspresso bot์—๊ฒŒ ๋ฌผ์–ด๋ณด์„ธ์š”.
์˜ˆ๋ฅผ๋“ค์–ด
- Why do I need to use Netspresso?
- Summarize how to compress the model with netspresso.
- Tell me what pruning is.
- What kinds of hardware can I use with this toolkit?
- Can I use YOLOv8 with this tool? If so, tell me the examples.","1. Why do I need to use Netspresso?
Netspresso is a machine learning pipeline that provides tools and methods for model compression and optimization. By using Netspresso, you can reduce the size and computational cost of your models, making them more efficient for deployment on resource-constrained devices. It also allows you to fine-tune and optimize your models for better performance. Overall, Netspresso helps you achieve lightweight and optimized models for real-time applications.
2. Summarize how to compress the model with Netspresso.
To compress a model with Netspresso, you can follow these steps:
- Step 1: Prepare your model.
- Step 2: Upload your model to Netspresso.
- Step 3: Choose the compression method (e.g., structured pruning, filter decomposition).
- Step 4: Configure the compression settings (e.g., pruning ratio, compression algorithm).
- Step 5: Start the compression process.
- Step 6: Check the compression result and download the compressed model.
3. Tell me what pruning is.
Pruning is a model compression technique that involves removing unnecessary parameters or structures from a complex model to make it more compact and efficient. In the context of Netspresso, pruning can be done using structured pruning, which removes entire neurons, filters, or channels from the model. This results in a simplified model that can be deployed without any specific hardware or software requirements for acceleration.
4. What kinds of hardware can I use with this toolkit?
Netspresso supports various hardware platforms for model deployment. Some of the supported hardware includes NVIDIA Jetson family devices, which can be used with TensorRT for accelerated inference. Other supported platforms include TensorFlow Lite and OpenVINO, which provide optimization for different hardware architectures. The specific hardware compatibility may depend on the compression method and the target framework used in Netspresso.
5. Can I use YOLOv8 with this tool? If so, tell me the examples.
Yes, you can use YOLOv8 with Netspresso for model compression and optimization. YOLOv8 is a popular object detection model, and Netspresso provides support for object detection models. You can follow the steps in the Netspresso documentation to prepare, upload, compress, and optimize your YOLOv8 model using the available methods and tools in Netspresso. The documentation provides examples and guidelines for using Netspresso with different models, including YOLOv8.","Source 0
Step 4: Package model (beta)
Package the model so that it can be deployed directly to the device.
1. Go to Package page
Select New package at the drop-down menu that appears when you click Download button.
2. Package the model
Enter the package name and select a base model to package.
Please note that the package name will be the library name and the name cannot be changed after packaging.
You can include pre/post processing codes (.py) with the model for the package (optional).
Download the pre/post processing code example and modify for your use cases.
3. Download package file and run the package
Packaged file will be automatically downloaded.
To run the package, use the code written below. {package_name} must be changed to your package name.
from np_{package_name}.models.model import NPModel

NPModel.initialize(num_threads=1)  # Initialize
npmodel = NPModel()
image_path = "./test.jpg"  # Image path
print(npmodel.run(image_path))  # Inference
NPModel.finalize()  # Memory management
######################################################
Source 1
Step 3: Convert model (beta)
Convert the model format according to the target device.
1. Go to Convert page
Click the Convert button on Models page.
2. Convert model
Enter the name and memo for the converted model. Select a base model to be converted and the target hardware to benchmark the model.
Depending on the framework of the base model, the options available for converting are different.
Models built with Model Searcher โ†’ TensorRT, TensorFlow Lite, OpenVINO
Custom models
ONNX โ†’ TensorRT, TensorFlow Lite, OpenVINO
Click the Start converting button to convert the model. (Converting for the NVIDIA Jetson family (TensorRT) may take up to 1 hour.)
3. Check the converting result
Converted model will be displayed on the Models page with performance benchmarks on the selected target hardware.
######################################################
Source 2
Method: Structured Pruning
Model Compression
The goal of model compression is to obtain a model that is simpler than the original without performance deterioration. By compressing a large model, the user can reduce storage and computational cost and make the model usable in real-time applications.
NetsPresso supports the following compression methods.
Structured Pruning
Filter Decomposition
This page describes Structured Pruning.
What is ""Pruning""?
Pruning is the process of removing individual or groups of parameters from a complex model to make it faster and more compact. This compressing procedure is divided into unstructured pruning and structured pruning by the pruning objects.
Unstructured Pruning
: Removes individual parameters and returns a sparse model, which requires an additional device to be accelerated.
Structured Pruning
: Removes entire neurons, filters, or channels and returns a model, which does not require any particular hardware or software to be accelerated.
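The contrast between the two pruning styles can be sketched with NumPy; this is an illustration of the general technique only, not NetsPresso's internal code:

```python
import numpy as np

rng = np.random.default_rng(0)
w = rng.normal(size=(8, 3, 3, 3))  # toy conv weight: 8 filters of shape 3x3x3

# Unstructured pruning: zero out the smallest individual weights. The tensor
# keeps its shape and becomes sparse, so special hardware or software is
# needed to turn the sparsity into an actual speedup.
threshold = np.median(np.abs(w))
w_unstructured = np.where(np.abs(w) >= threshold, w, 0.0)
assert w_unstructured.shape == w.shape

# Structured pruning: delete whole filters. The result is a smaller dense
# tensor that runs faster without any particular hardware or software.
drop = {1, 5}                                    # filters chosen for removal
keep = [i for i in range(w.shape[0]) if i not in drop]
w_structured = w[keep]
assert w_structured.shape == (6, 3, 3, 3)
```
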
The goal of pruning is to reduce computational resources and accelerate the model by removing unnecessary filters. (Model Compressor only supports structured pruning; unstructured pruning will be published in the near future.)
However, a fine-tuning process is necessary to compensate for the loss of accuracy.
Structured Pruning
Supported functions
Pruning in Model Compressor provides two pruning functions (Pruning by Channel Index / Pruning by Criteria) and one recommendation method (SLAMP) to meet users' model-compression needs.
Pruning by Channel Index
Removes the filters that the user selects. If the selected filters are redundant or less important, this returns a better-performing model.
Pruning by Criteria
L2 Norm
: L2-Norm is used to represent the importance of the corresponding filter. In other words, this method prunes filters based on the magnitude of weights.
Nuclear Norm
: The nuclear norm is the sum of the singular values representing the energy. It computes the nuclear norm on the feature map to determine the filter's relevance. For this reason, a portion of the dataset is needed. For more detail, please refer to the following paper.
Seul-Ki Yeom, Kyung-Hwan Shim, and Jee-Hyun Hwang. Toward compact deep neural networks via energy-aware pruning. arXiv preprint, 2021.
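As a small illustration of the criterion itself (not NetsPresso's implementation), the nuclear norm of a filter's feature map is simply the sum of its singular values:

```python
import numpy as np

rng = np.random.default_rng(1)
feature_map = rng.normal(size=(16, 16))  # toy activation map from one filter

# Nuclear norm = sum of singular values (the 'energy' of the response).
nuclear_norm = np.linalg.svd(feature_map, compute_uv=False).sum()

# A rank-1 (low-energy) response has a much smaller nuclear norm,
# marking its filter as a pruning candidate.
rank1_map = np.outer(np.ones(16), np.ones(16))
rank1_norm = np.linalg.svd(rank1_map, compute_uv=False).sum()
assert rank1_norm < nuclear_norm
```
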
Geometric Median
: Geometric Median is used to measure the redundancy of the corresponding filter and remove redundant filters. For more detail, please refer to the following paper.
Yang He, Ping Liu, Ziwei Wang, Zhilan Hu, and Yi Yang. Filter pruning via geometric median for deep convolutional neural networks acceleration. In CVPR, 2019.
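A rough sketch of the idea from the cited paper (my simplification, not NetsPresso's code): a filter whose total distance to the other filters is smallest lies near the geometric median of the set, so its information is well represented by the others and it is treated as redundant:

```python
import numpy as np

rng = np.random.default_rng(2)
filters = rng.normal(size=(8, 27))   # 8 filters flattened to 27-dim vectors

# Pairwise Euclidean distances between filters; the filter with the
# smallest total distance to the others is closest to the geometric
# median of the set and is pruned first as the most redundant.
pairwise = np.linalg.norm(filters[:, None, :] - filters[None, :, :], axis=-1)
redundancy = pairwise.sum(axis=1)
most_redundant = int(np.argmin(redundancy))
assert 0 <= most_redundant < len(filters)
```
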
Normalization
Because the distribution and magnitude of weights vary from layer to layer, it is vital to compare the different distributions from a common perspective. For this reason, all criterion values are normalized per layer.
""Recommendation"" in Model Compressor
The ""Recommendation"" enables so-called global pruning, which allocates a pruning ratio to each layer automatically. The current version only supports SLAMP.
SLAMP (Structured Layer-adaptive Sparsity for the Magnitude-based Pruning)
SLAMP is inspired by ""Layer-adaptive Sparsity for the Magnitude-based Pruning"" (LAMP) from ICLR 2021. LAMP is an unstructured pruning method; here it is modified to measure layer-wise importance for structured pruning.
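The exact SLAMP formula is not given here; the following is a hedged sketch that adapts the published LAMP score to per-filter norms (the adaptation details are my assumption, not NetsPresso's implementation):

```python
import numpy as np

def slamp_like_scores(filter_norms):
    # Assumed structured adaptation of the LAMP score: each filter's squared
    # norm divided by the total squared norm of all filters in the same
    # layer whose norm is at least as large (including itself).
    norms = np.asarray(filter_norms, dtype=float)
    order = np.argsort(norms)                     # ascending by norm
    sq = norms[order] ** 2
    suffix = np.cumsum(sq[::-1])[::-1]            # itself + all larger filters
    scores = np.empty_like(norms)
    scores[order] = sq / suffix
    return scores

layer_a = slamp_like_scores([0.1, 0.5, 2.0])      # hypothetical filter norms
layer_b = slamp_like_scores([1.0, 1.1])
scores = np.concatenate([layer_a, layer_b])
prune_order = np.argsort(scores)                  # global: weakest scores first
assert prune_order[0] == 0                        # the 0.1-norm filter goes first
```
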
Normalization function
The following normalization function is applied to the criterion values above.
What you can do with Model Compressor
Choose ""Pruning by Channel Index"" or ""Pruning by Criteria"" according to your purpose.
""Pruning by Channel Index"" is recommended for experts who already know which filters are unnecessary.
""Pruning by Criteria"" is recommended for users who want to prune specific layers by a certain ratio, or for beginners in model compression.
To use ""Pruning by Channel Index""
Check ""Pruning by Channel Index"".
Check the layers to be pruned.
Insert the filter index to be pruned (ex. 105, 8, 9, 11-13).
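The index field accepts single indices and ranges; a small helper showing how such a spec could expand (the parser itself is my illustration, not NetsPresso code):

```python
def expand_filter_indices(spec):
    # Expand a spec like '105, 8, 9, 11-13' into a sorted list of indices.
    indices = set()
    for part in spec.split(','):
        part = part.strip()
        if '-' in part:
            lo, hi = (int(x) for x in part.split('-'))
            indices.update(range(lo, hi + 1))     # range is inclusive
        else:
            indices.add(int(part))
    return sorted(indices)

assert expand_filter_indices('105, 8, 9, 11-13') == [8, 9, 11, 12, 13, 105]
```
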
To use ""Pruning by Criteria""
Check one of the criteria (ex. L2-Norm) to calculate the importance score of each filter.
Prune specific layers by a certain ratio:
Check the layers to be pruned.
Insert the pruning ratio (ex. 0.2).
Compress all layers automatically by a given ratio:
Press ""Recommendation"" and insert the ratio (ex. 0.8) to compress the model.
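The ""Pruning by Criteria"" flow (pick a criterion, pick layers, give a ratio) can be mimicked offline with NumPy; an illustrative sketch of ratio-based L2-norm pruning, not the service's actual code:

```python
import numpy as np

def prune_layer_by_l2(weights, ratio):
    # Keep the strongest filters by L2 norm; prune a 'ratio' fraction of them.
    # weights: array of shape (out_channels, ...); ratio: fraction to prune.
    n = weights.shape[0]
    n_prune = int(n * ratio)
    norms = np.linalg.norm(weights.reshape(n, -1), axis=1)
    keep = np.sort(np.argsort(norms)[n_prune:])   # drop the weakest filters
    return weights[keep], keep

rng = np.random.default_rng(3)
layer = rng.normal(size=(10, 3, 3, 3))            # toy layer with 10 filters
pruned, kept = prune_layer_by_l2(layer, ratio=0.2)
assert pruned.shape == (8, 3, 3, 3)               # 20% of filters removed
```
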
Verification of the Structured Pruning
Pruning in Model Compressor removes specific filters while preserving the model's previous knowledge. To verify the pruning method in Model Compressor, we assess the categorical accuracy of the compressed model and its fine-tuned model.
Cifar100 Accuracy of the Compressed Model and Fine-tuned Model
The compressed models are compressed by the following strategies: L2Norm, SLAMP, intersection. (Original models are from here)
The dashed and solid lines in the above image indicate the compressed model from Model Compressor and the model after fine-tuning, respectively, for the given pruning ratio.
๐ŸšงNot Supported Layers
Group convolutional layer is currently not supported and will be updated in the near future.
",,,2023-06-28 07:12:14.309344
What is NetsPresso?,"NetsPresso is a machine learning pipeline that provides various tools and functionalities for model development and optimization. It includes components such as Model Searcher, Model Compressor, and Model Launcher. NetsPresso aims to simplify the process of developing and deploying machine learning models by offering features like dataset preparation, model compression, and model deployment. It supports tasks like image classification, object detection, semantic segmentation, and super resolution. NetsPresso also provides benchmark results and best practices for different models and datasets.","Source 0
Model Compressor
Using NetsPresso Model Compressor on a variety of well-known public datasets, this article shows users how to get outstanding compression results.
The fine-tuning procedure is necessary after each compression. It usually follows the original model's training configuration, except for the learning rate: after a few batches of training, the learning rate is adjusted depending on whether the loss has converged.
All of the original and compressed models can be downloaded easily on the Model Compressor Model Zoo.
See Image Classification Results
See Object Detection Results
See Semantic Segmentation Results
See Super Resolution Results
######################################################
Source 1
Connect Personal Server
To connect a personal server, start by clicking the 'New Server' button on the screen below.
Location: My Account > Resources
The way to connect a personal server to NetsPresso is to install an agent on the personal server.
The process is as follows.
Step 1. Create Server
Specify an identifiable name and the path where the agent will be installed.
Step 2. Set Server
Copy the script that pre-checks the server environment and collects server information.
Save the copied script as an .sh file and run it in the shell.
Running the script prints the server information in JSON format.
Copy the JSON server information and paste it into the result input field.
Step 3. Connect Server
Check the server information and copy the connect script that connects the server.
Save the copied script as an .sh file and run it in the shell.
Once the script is executed, the server is connected.
You can check the server you have connected to on the Resources page.
######################################################
Source 2
- Object Detection
All of the original and compressed models can be downloaded easily on the Model Compressor Model Zoo.
You can get Compressed results with Automatic Compression and Compressed (Adv.) results with Advanced Compression.
PyTorch
Model | Best Practice | Type | Dataset | mAP(0.5) (%) | mAP(0.5:0.95) (%) | FLOPs (M) | Params (M) | Latency (ms) | Model Size (MB)
YOLOX | - | Original | COCO | 68.0 | 49.7 | 156006.20 | 54.21 | 12239.46 | 207.37
YOLOX | Google Colab | Compressed-1 | COCO | 67.16 (-0.84) | 48.64 (-1.06) | 101804.06 (1.53x) | 19.96 (2.7x) | 8502.72 (1.44x) | 76.61 (2.7x)
YOLOX | Google Colab | Compressed-2 | COCO | 61.43 (-6.57) | 43.23 (-5.47) | 38607.03 (4.04x) | 4.93 (11.0x) | 4235.37 (2.89x) | 19.17 (10.80x)
The modelโ€™s latency is measured on Raspberry Pi 4B (1.5GHz ARM Cortex).
Options: FP32, ONNX runtime
TensorFlow-Keras
Model | Best Practice | Type | Dataset | mAP(0.5) (%) | mAP(0.5:0.95) (%) | FLOPs (M) | Params (M) | Latency (ms) | Model Size (MB)
YOLOv4 | - | Original | PASCAL VOC | 82.22 | - | 61871.82 | 65.32 | 64318.70 | 262.90
YOLOv4 | Google Colab | Compressed-1 | PASCAL VOC | 87.23 (+5.01) | - | 11459.69 (5.4x) | 10.59 (6.17x) | 28651.70 (2.16x) | 44.12 (5.96x)
YOLOv4 | Google Colab | Compressed-2 | PASCAL VOC | 87.91 (+5.69) | - | 14442.96 (4.28x) | 10.71 (6.1x) | 28976.40 (2.14x) | 44.36 (5.93x)
YOLOv4 model with EfficientNet B1 based backbone.
The modelโ€™s latency is measured on Raspberry Pi 4B (1.5GHz ARM Cortex).
Options: FP32, TFLite
######################################################
Source 3
- Object Detection
Quick Search supports training a model based on open sources, and NPNets will be available with Advanced Search.
YOLOv5 and YOLOv5-NPNets
Dataset | Model | mAP(0.5) (%) | mAP(0.5:0.95) (%) | GFLOPs | Parameters (M) | Model Size (MB)
PASCAL VOC | YOLOv5n | 72.2 | 44 | 4.3 | 1.79 | 3.62
PASCAL VOC | YOLOv5n-NPNet | 73.4 | 46.3 | 3.5 | 1.18 | 2.49
PASCAL VOC | YOLOv5s | 77.9 | 52.2 | 16 | 7.07 | 13.7
PASCAL VOC | YOLOv5s-NPNet | 80.2 | 56 | 12.8 | 4.61 | 9.05
PASCAL VOC | YOLOv5m | 82.1 | 59.3 | 48.3 | 20.95 | 40.2
PASCAL VOC | YOLOv5m-NPNet | 83.4 | 60.6 | 37 | 12.26 | 23.7
PASCAL VOC | YOLOv5l | 82.9 | 61 | 108.3 | 46.24 | 88.6
PASCAL VOC | YOLOv5l-NPNet | 85.1 | 63.8 | 81 | 25.19 | 48.5
",,,2023-08-31 13:29:07.271798