When required, an FTP account and password are set up so that the external system that generates the input data can access the system, and access to other directories is blocked.
The directories used for exchanging data with the external system are as follows.
SOA is characterized by breaking business processes down into basic, standardized building blocks and flexibly aligning them with IT processes.
Publicly supported private rental housing offers rent at 95% or less of the surrounding market rate, along with the advantage of living securely in a good-quality home for eight years.
The criticism is that the response has not moved beyond dealing with victimized children case by case after harm occurs, rather than investing in support activities and infrastructure that could prevent child crises in advance.
The point is that the 'judicialization of politics' is becoming extreme: instead of seeking political compromise on each pending issue, the ruling and opposition parties go to the prosecution and the courts and press them to punish the other side.
Budongsan114 (๋ถ€๋™์‚ฐ114) announced on the 12th that its analysis of annual rental-yield trends for officetels nationwide showed that, as of the end of 2018, the 5% annual rental yield level had collapsed.
Realmeter (๋ฆฌ์–ผ๋ฏธํ„ฐ) attributed this to the spread of the 'tax bomb' controversy, which followed an erroneous report, issued right after the September 13 real estate measures were announced, that the comprehensive real estate tax would apply to a wider range of taxpayers.
Regarding Russia's request for a closed-door meeting of the UN Security Council, Minister Kang said, 'I expect the issue of sanctions against North Korea will be discussed,' and explained that Russia's request had also been known in advance.
Regarding the improvement measures, Commissioner Min said, 'The standards for the use of physical force are also in the final stage of review,' explaining that 'detailed improvement tasks will have to be applied and refined for each type of situation.'
Regarding the case in which a netizen claimed that her husband had been unfairly placed in pretrial detention on charges of indecent assault, a court official stated that the presiding judge had made an objective judgment.
๋”ฐ๋ผ์„œ ๊ธฐ์—…์ด ๋‚ด๋ถ€๋Š” ๋ฌผ๋ก  ํ˜‘๋ ฅ์‚ฌ ๋ฐ ๊ณ ๊ฐ๊ณผ ํ‘œ์ค€ํ™”๋œ e-๋น„์ฆˆ๋‹ˆ์Šค ํ™˜๊ฒฝ์„ ๊ตฌ์ถ•ํ•  ์ˆ˜ ์žˆ๊ฒŒ ๋œ๋‹ค.
์ด๋ฒˆ์— ๋ฐœํ‘œ๋œ ์†”๋ฃจ์…˜๊ตฐ์€ IBM ์†Œํ”„ํŠธ์›จ์–ด์‚ฌ์—…๋ณธ๋ถ€, ์„œ๋น„์Šค์‚ฌ์—…์กฐ์ง์ธ ๊ธ€๋กœ๋ฒŒ ์„œ๋น„์Šค, ๊ทธ๋ฆฌ๊ณ  ์ปจ์„คํŒ…์‚ฌ์—…์กฐ์ง์ธ IBM ๋น„์ฆˆ๋‹ˆ์Šค์ปจ์„คํŒ…์„œ๋น„์Šค(IBM BCS)๊ฐ€ ์ฃผ์ถ•์ด ๋˜์–ด ๊ณต๊ธ‰ํ•œ๋‹ค.
์†”๋ฃจ์…˜์˜ ํ•ต์‹ฌ ๊ตฌ์„ฑ์€ SOA ๊ธฐ๋ฐ˜ ์œ„์— ์• ํ”Œ๋ฆฌ์ผ€์ด์…˜์„ ๊ตฌ์ถ•ํ•˜๊ณ  ํ†ตํ•ฉํ•˜๋Š” ์†Œํ”„ํŠธ์›จ์–ด ์ œํ’ˆ, ํ™˜๊ฒฝ ํ‰๊ฐ€, ์ „๋žต ์ˆ˜๋ฆฝ ๋ฐ ๊ธฐํš, ์• ํ”Œ๋ฆฌ์ผ€์ด์…˜ ํ˜์‹ , ์ปดํฌ๋„ŒํŠธ ๋น„์ฆˆ๋‹ˆ์Šค ๋ชจ๋ธ๋ง ์ปจ์„คํŒ… ๋“ฑ 5์ข…์ด๋‹ค.
๋˜ ์• ํ”Œ๋ฆฌ์ผ€์ด์…˜ ํ˜์‹  ๋ฐ ํ†ตํ•ฉ ์„œ๋น„์Šค, ๋น„์ฆˆ๋‹ˆ์Šค ํ”„๋กœ์„ธ์Šค ๋งต์„ ์ž‘์„ฑํ•˜๊ณ  ๋น„์ฆˆ๋‹ˆ์Šค๋ฅผ ์ง์›-ํ”„๋กœ์„ธ์Šค-์‹œ์Šคํ…œ์ด ๋‹ด๋‹นํ•œ ์—…๋ฌด๋ณ„๋กœ ๊ตฌ๋ถ„ํ•˜๋Š” ์ปดํฌ๋„ŒํŠธ ๋น„์ฆˆ๋‹ˆ์Šค ๋ชจ๋ธ๋ง ์ปจ์„คํŒ… ์„œ๋น„์Šค ๋“ฑ์ด ์žˆ๋‹ค.
IBM์€ ์ด๋ฒˆ ์†”๋ฃจ์…˜ ๋ฐœํ‘œ์™€ ํ•จ๊ป˜ ์ด ๋ถ„์•ผ ์‚ฌ์—…์„ ๋Œ€ํญ ๊ฐ•ํ™”ํ•˜๊ฒŒ ๋˜์—ˆ๋‹ค.
โ€œIBM์˜ SOA ์†”๋ฃจ์…˜์€ ๊ธฐ์—…์ด ์˜จ๋””๋งจ๋“œ e-๋น„์ฆˆ๋‹ˆ์Šค๋ฅผ ๋‹ฌ์„ฑํ•˜๋Š” ๋ฐ ํ•„์š”ํ•œ ๊ธฐ์ˆ ์„ ์ œ๊ณตํ•˜๋ฉฐ ๊ณ ๊ฐ, ์ œํœด์‚ฌ, ํ˜‘๋ ฅ์—…์ฒด ์ „๋ฐ˜์— ๊ฑธ์ณ์„œ ์• ํ”Œ๋ฆฌ์ผ€์ด์…˜์„ ํ†ตํ•ฉํ•  ์ˆ˜ ์žˆ๋„๋ก ํ•จ์œผ๋กœ์จ ๊ณ ๊ฐ์˜ ์š”๊ตฌ, ์‹œ์žฅ ๋ณ€ํ™”, ์™ธ๋ถ€์  ์š”์ธ์— ์‹ ์†ํ•˜๊ฒŒ ์ ์‘ํ•  ์ˆ˜ ์žˆ๋„๋ก ํ•œ๋‹ค.
์›น์Šคํ”ผ์–ด ๋น„์ฆˆ๋‹ˆ์Šค ์ธํ‹ฐ๊ทธ๋ ˆ์ด์…˜ ์„œ๋ฒ„ ํŒŒ์šด๋ฐ์ด์…˜: ๊ณ ๊ฐ์ด ์„œ๋น„์Šค ์ง€ํ–ฅ์  ์•„ํ‚คํ…์ฒ˜๋กœ ์• ํ”Œ๋ฆฌ์ผ€์ด์…˜์„ ๊ตฌ์ถ• ๋ฐ ํ†ตํ•ฉํ•  ์ˆ˜ ์žˆ๋„๋ก ํ•˜๋Š” ์†”๋ฃจ์…˜.
๋Œ€๊ธฐ์—… ์ œํ’ˆ์œผ๋กœ์„œ๋Š” ๋น„์ฆˆ๋‹ˆ์Šค ๋กœ์ง์„ ์‹คํ–‰ํ•˜๊ธฐ ์œ„ํ•œ ์‚ฐ์—…ํ‘œ์ค€ ๊ทœ๊ฒฉ์ธ โ€œ๋น„์ฆˆ๋‹ˆ์Šค ํ”„๋กœ์„ธ์Šค ์‹คํ–‰ ์–ธ์–ด(BPEL, Business Process Execution Language)โ€๋ฅผ ์ง€์›ํ•˜๋Š” ์ตœ์ดˆ์˜ ์ œํ’ˆ์ด๋‹ค.
์›น์Šคํ”ผ์–ด ๋น„์ฆˆ๋‹ˆ์Šค ์ธํ‹ฐ๊ทธ๋ ˆ์ด์…˜์„ ์ด์šฉํ•จ์œผ๋กœ์จ ๊ธฐ์กด ์›น ์„œ๋น„์Šค ๋ฐ ํŒจํ‚ค์ง€ ์• ํ”Œ๋ฆฌ์ผ€์ด์…˜์„ ์ด์šฉํ•ด ์žฌ์‚ฌ์šฉ๊ฐ€๋Šฅ ์„œ๋น„์Šค๋ฅผ ๊ตฌ์ถ•ํ•˜๊ณ  ์„œ๋น„์Šค๋ฅผ ๊ฒฐํ•ฉํ•ด์„œ ๋น„์ฆˆ๋‹ˆ์Šค ํ”„๋กœ์„ธ์Šค์™€ ์†Œํ”„ํŠธ์›จ์–ด ์• ํ”Œ๋ฆฌ์ผ€์ด์…˜์„ ์—ฐ๋™์‹œํ‚ฌ ์ˆ˜ ์žˆ๋‹ค.
IBM SOA ํ‰๊ฐ€ ์„œ๋น„์Šค: ํ˜„์žฌ SOA๋ฅผ ๋„์ž…ํ•˜๋ ค๋Š” ๊ณ ๊ฐ์ด ๊ณ„ํšํ•˜๊ณ  ์žˆ๋Š” SOA์˜ ๊ธฐ๋Šฅ์  ๋ฐ ๊ธฐ์ˆ ์  ์ธก๋ฉด์„ ํ‰๊ฐ€ํ•˜๋Š” ์„œ๋น„์Šค๋กœ, IBM ๊ธ€๋กœ๋ฒŒ ์„œ๋น„์Šค๊ฐ€ ์ œ๊ณตํ•œ๋‹ค .
๊ตญ๋‚ด ๋ฐ ํ•ด์™ธ ๊ณ„์—ด์‚ฌ์˜ ์‹ ๊ทœ ์—ฐ๊ณ„ ๋ฐ ์˜คํ”ˆ์„ ์œ„ํ•˜์—ฌ ๋‹ค์Œ ์ ˆ์ฐจ์— ๋”ฐ๋ผ ์ง„ํ–‰ํ•˜๋ฉฐ, ๊ณ„์—ด์‚ฌ๊ฐ€ ์‹ ๊ทœ ์—ฐ๊ณ„๋  ๋•Œ๋งˆ๋‹ค ๊ด€๋ จ ์‚ฐ์ถœ๋ฌผ์„ ์ฐธ์กฐํ•˜๊ณ  ํ•ญ์ƒ ์ตœ์‹ ์œผ๋กœ ์—…๋ฐ์ดํŠธ ํ•œ๋‹ค.
์‹ ๊ทœ ๊ณ„์—ด์‚ฌ ์—ฐ๊ณ„๋ฅผ ์œ„ํ•œ ์‚ฌ์ „ ์ค€๋น„๋ฅผ ๋งˆ์น˜๊ณ  ๊ณ„์—ด์‚ฌ ์„œ๋ฒ„์— EAI Agent๋ฅผ ์„ค์น˜ํ•˜๊ณ  ๋‚˜๋ฉด, ๊ณ„์—ด์‚ฌ์˜ CASE์— ํ•ด๋‹นํ•˜๋Š” ์ธํ„ฐํŽ˜์ด์Šค๋ฅผ EAI ์ธํ„ฐํŽ˜์ด์Šค ๊ฐœ๋ฐœ ๋ฐฉ์•ˆ์— ๋”ฐ๋ผ ์ง„ํ–‰ํ•œ๋‹ค.
Refer to the V$SYSTEM_EVENT view for time waited and average waits for the following actions:
To estimate the time waited for reads incurred by rereading data blocks that had to be written to disk because of a request from another instance, multiply the statistic (for example, the time waited for db file sequential reads) by the percentage of read I/O caused by previous cache flushes, as shown in this formula:
time waited for db file sequential read * (lock buffers for read / physical reads)
Where "lock buffers for read" is the value for lock converts from N to S derived fromV$LOCK_ACTIVITY and "physical reads" is from the V$SYSSTAT view.
Similarly, the proportion of the time waited for database file parallel writes caused by pings can be estimated by multiplying the db file parallel write time as found in V$SYSTEM_EVENT by:
Table 11-1 describes some global cache coherence-related views and the types of statistics they contain.
Refer to V$SYSSTAT to count requests for the actions shown to the right.
Note: Also refer to the convert type-specific rows in V$LOCK_ACTIVITY.
Refer to V$SYSSTAT for the amount of time waited for the actions shown to the right.
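For example, a simple monitoring query against V$SYSSTAT might gather the global cache request counts and times in one pass. This is only a sketch; the time-related statistic names vary by release, so verify them in your V$SYSSTAT.

    -- Sketch: global cache request counts and (release-dependent) times.
    SELECT name, value
      FROM v$sysstat
     WHERE name IN ('global cache gets',
                    'global cache converts',
                    'global cache get time',
                    'global cache convert time');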
As mentioned, it is useful to maintain application profiles per transaction and per unit of time.
This allows you to compare two distinct workloads or to detect changes in a workload.
The rates are also helpful in determining capacities and for identifying throughput issues.
Oracle recommends that you incorporate the following ratios of statistics in your performance monitoring scripts:
Calculate the same statistics per second or minute by dividing the total counts or times waited by the measurement interval.
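For instance, a rough per-second rate can be derived by dividing the cumulative counters by the number of seconds since instance startup. This sketch uses V$INSTANCE.STARTUP_TIME as the measurement baseline; in practice you would normally difference two snapshots taken over a shorter interval.

    -- Sketch: crude per-second rates since instance startup.
    SELECT st.name,
           st.value,
           ROUND(st.value / ((SYSDATE - i.startup_time) * 86400), 2) AS per_second
      FROM v$sysstat st, v$instance i
     WHERE st.name IN ('global cache gets', 'global cache converts',
                       'physical reads', 'physical writes');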
The percentage of buffers accessed for global work, or the percentage of I/O caused by inter-instance synchronization, can be an important measure of how efficiently your application processes share data.
It can also reveal whether the database is designed for optimum scalability.
Use the following calculation to determine the percentage of buffer accesses for local operations, in other words, reads and changes of database buffers that are not subject to a lock conversion:
Similarly, compute the percentage of read and write I/O for local operations using the following equations:
This calculation gives the percentage of DBWR writes that are performed for local work.
This calculation gives the percentage of reads by user processes for local work only; it does not refer to forced reads.
In the previous formula, the physical read statistic from V$SYSSTAT is combined with the "Lock buffers for read" value from V$LOCK_ACTIVITY.
You can base the local write ratio entirely on the corresponding values from V$SYSSTAT.
(1 - (global cache gets + global cache converts) / (consistent gets + db block gets)) * 100
These ratios provide a rough indication of the application's data sharing and workload patterns.
Moreover, they represent the probability that a data block access is either global or local.
You can therefore use this information as a rough estimator in scalability calculations.
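A sketch of how these ratios might be evaluated directly from the views follows; as before, the V$LOCK_ACTIVITY filter standing in for "lock buffers for read" is an assumption to verify on your system.

    -- Sketch: percentage of buffer accesses that are local.
    SELECT (1 - (gcg.value + gcc.value) / (cg.value + dbg.value)) * 100
             AS pct_local_buffer_access
      FROM v$sysstat gcg, v$sysstat gcc, v$sysstat cg, v$sysstat dbg
     WHERE gcg.name = 'global cache gets'
       AND gcc.name = 'global cache converts'
       AND cg.name  = 'consistent gets'
       AND dbg.name = 'db block gets';

    -- Sketch: percentage of physical reads performed for local work only.
    SELECT (1 - la.counter / pr.value) * 100 AS pct_local_reads
      FROM v$sysstat pr, v$lock_activity la
     WHERE pr.name     = 'physical reads'
       AND la.from_val = 'NULL'   -- "lock buffers for read" (assumed N-to-S row)
       AND la.to_val   = 'S';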
If your application is not performing well, analyze each component of the application to identify which components are causing problems.
To do this, check the operating system and DLM statistics, as explained under the next heading, for indications of contention or excessive CPU usage.
Excessive lock conversions that you can measure with specific procedures may reveal excessive read/write activity or high CPU requirements by DLM components.
Examine the statistics from this view and analyze the hit ratios in the shared pool and the buffer cache.
These are the result of inserts into index blocks when multiple instances share a sequence generator for primary key values.
You may need to use a multiplier such as SEQUENCE_NUMBER x INSTANCE_NUMBER x 1,000,000,000 to prevent the instances from inserting new entries into the same index.
Creating a sequence without using the CACHE clause may create a lot of overhead.
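One hypothetical way to apply both recommendations is sketched below: cache a generous range of sequence values per instance, and fold the instance number into the generated key so that each instance inserts into a distinct key range (and therefore into different index blocks). The table and column names are made up for illustration, and the offset arithmetic is one variant of the multiplier approach described above.

    -- Sketch: cache sequence values per instance to reduce sequence overhead.
    CREATE SEQUENCE order_seq CACHE 1000;

    -- Sketch: spread keys by instance so instances insert into different
    -- index ranges (orders/order_id are hypothetical names).
    INSERT INTO orders (order_id, created_at)
    VALUES (USERENV('INSTANCE') * 1000000000 + order_seq.NEXTVAL, SYSDATE);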
The chapter describes Oracle Parallel Server and Cache Fusion-related statistics and provides procedures that explain how to use these statistics to monitor and tune performance.
This chapter also briefly explains how Cache Fusion resolves reader/writer conflicts in Oracle Parallel Server.
It describes Cache Fusion's benefits in general terms that apply to most types of systems and applications.
The topics in this chapter include:
When a data block requested by one instance is in the memory cache of a remote instance, Cache Fusion resolves the read/write conflict using remote memory access, not disk access.
The requesting instance sends a request for a consistent-read copy of the block to the holding instance.
The Block Server Process (BSP) on the holding instance transmits the consistent-read image of the requested block directly from the holding instance's buffer cache to the requesting instance's buffer cache across a high speed interconnect.
As Figure 12-1 illustrates, Cache Fusion enables the buffer cache of one node to send data blocks directly to the buffer cache of another node by way of low latency, high bandwidth interconnects.
This reduces the need for expensive disk I/O in parallel cache management.
Cache Fusion also leverages new interconnect technologies for low latency, user-space based, interprocessor communication.
This potentially lowers CPU usage by reducing operating system context switches for inter-node messages.
Note: Cache Fusion is always enabled.
Cache Fusion only solves part of the block conflict resolution issue by providing improved scalability for applications that experience high levels of reader/writer contention.
For applications with high writer/writer concurrency, you also need to accurately partition your application's tables to reduce the potential for writer/writer conflicts.
Cache Fusion improves application transaction throughput and scalability by providing:
Applications demonstrating high reader/writer conflict rates under disk-based PCM benefit the most from Cache Fusion.
Packaged applications also scale more effectively as a result of Cache Fusion.
Applications in which OLTP and reporting functions execute on separate nodes may also benefit from Cache Fusion.
This reduces the pinging of data blocks to disk.
Performance gains are derived primarily from reduced X-to-S lock conversions and the corresponding reduction in disk I/O for X-to-S lock conversions.
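To see whether exclusive-to-shared conversions actually decline on your system, the X-to-S counters in V$LOCK_ACTIVITY can be sampled before and after a workload change. The FROM_VAL/TO_VAL codes below are assumptions; confirm them against the ACTION_VAL descriptions on your platform.

    -- Sketch: sample X-to-S lock conversion activity.
    SELECT from_val, to_val, action_val, counter
      FROM v$lock_activity
     WHERE from_val = 'X'
       AND to_val   = 'S';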
Furthermore, the instance that was changing the cached data block before it received a read request for the same block from another instance would not have to request exclusive access to the block again for subsequent changes.
This is because the instance retains the exclusive lock and the buffer after the block is shipped to the reading instance.
Because Cache Fusion exploits high speed IPCs, Oracle Parallel Server benefits from the performance gains of the latest technologies for low latency communication across cluster interconnects.
Cache Fusion reduces CPU utilization by taking advantage of user-mode IPCs, also known as "memory-mapped IPCs", for both Unix and NT based platforms.
If the appropriate hardware support is available, operating system context switches are minimized beyond the basic reductions achieved with Cache Fusion alone.
This also eliminates costly data copying and system calls.
Once your interconnect is operative, you cannot significantly influence its performance.
Interconnects that support Oracle Parallel Server and Cache Fusion use one of these protocols:
Oracle Parallel Server can use any interconnect product that supports these protocols.
The interconnect product must also be certified for Oracle Parallel Server hardware cluster platforms.
Cache Fusion performance levels may vary in terms of latency and throughput from application to application.
Performance is further influenced by the type and mixture of transactions your system processes.
The performance gains from Cache Fusion also vary with each workload.
The hardware, the interconnect protocol specifications, and the operating system resource usage also affect performance.
If your application did not demonstrate a significant amount of consistent-read contention prior to Cache Fusion, your performance with Cache Fusion will likely remain unchanged.
However, if your application experienced numerous lock conversions and heavy disk I/O as a result of consistent-read conflicts, your performance with Cache Fusion should improve significantly.
A comparison of the locking and I/O statistics for Oracle 8.0, and the Cache Fusion statistics used to monitor inter-instance performance, are discussed in the following sections.
The main goal of monitoring Cache Fusion and Oracle Parallel Server performance is to determine the cost of global processing and quantify the resources required to maintain coherency and synchronize the instances.
Do this by analyzing the performance statistics from several views as described in the following sections.
Use these monitoring procedures on an ongoing basis to observe processing trends and to maintain processing at optimal levels.
Many statistics are available to measure the work done by different components of the database kernel, such as the cache layer, the transaction layer, or the I/O layer.
Moreover, timed statistics allow you to accurately determine the time spent on processing certain requests or the time waited for specific events.
From these statistics sources, work rates, wait time, and efficiency ratios can be derived.
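Note that wait-time figures such as TIME_WAITED are only populated when timed statistics are enabled. A minimal sketch, assuming TIMED_STATISTICS can be changed dynamically on your release:

    -- Sketch: enable timed statistics and inspect the I/O-related wait events.
    ALTER SYSTEM SET timed_statistics = TRUE;

    SELECT event, total_waits, time_waited, average_wait
      FROM v$system_event
     WHERE event IN ('db file sequential read', 'db file parallel write')
     ORDER BY time_waited DESC;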
See Also: Chapter 7 for more information on lock types.