<?xml version="1.0" encoding="UTF-8"?>
<html>
    <head>
        <link type="text/css" rel="stylesheet" href="./css/template.css" />
        <link type="text/css" rel="stylesheet" href="./css/SyntaxHighlighter.css" />
        <meta http-equiv="Content-Type" content="text/html; charset=UTF-8" />
        <title>Creation of the RAID level 1 for the root folder</title>
        <script language="javascript" src="./js/shInit.js" />
        <script language="javascript" src="./js/shCore.js" />
        <script language="javascript" src="./js/shBrushCpp.js" />
        <script language="javascript" src="./js/shBrushCSharp.js" />
        <script language="javascript" src="./js/shBrushCss.js" />
        <script language="javascript" src="./js/shBrushDelphi.js" />
        <script language="javascript" src="./js/shBrushJava.js" />
        <script language="javascript" src="./js/shBrushJScript.js" />
        <script language="javascript" src="./js/shBrushPhp.js" />
        <script language="javascript" src="./js/shBrushPython.js" />
        <script language="javascript" src="./js/shBrushRuby.js" />
        <script language="javascript" src="./js/shBrushSql.js" />
        <script language="javascript" src="./js/shBrushVb.js" />
        <script language="javascript" src="./js/shBrushXml.js" />
        <keywords>RAID,1,solaris,9,root,racine</keywords>
        <author>Serhiy KVITKA</author>
    </head>
    <body>
        <div class="chapter">
            <h2>Introduction</h2>
            <p>In order to explain how to set up RAID 1, we will take as an example a root partition located on /dev/dsk/c0t0d0s0, mirrored to a slice of the same size on /dev/dsk/c0t1d0s0. For the storage of the State Database, two 16 MB partitions are created: /dev/dsk/c0t0d0s7 and /dev/dsk/c0t1d0s7.</p>
            <p>
				For better reliability, it is recommended to store the State Database on physical disks other than those holding the partitions that make up the RAID.
                <br />
				Here we study the simplest case: a system with only two physical disks.
            </p>
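The steps below assume that c0t1d0 already carries a slice layout identical to c0t0d0. If it does not, one common way to copy the partition table (an extra preparation step, not part of the original procedure) is:

```shell
# Copy the VTOC (slice table) from the master disk to the mirror disk.
# Slice 2 conventionally represents the whole disk on Solaris:
# prtvtoc reads the label, fmthard -s writes it to the second disk.
prtvtoc /dev/rdsk/c0t0d0s2 | fmthard -s - /dev/rdsk/c0t1d0s2
```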
        </div>
        <div class="chapter">
            <h2>The creation of the State Database (SD)</h2>
            <p>
				Before creating any RAID volume on a Solaris system, it is essential to create the <acronym title="State Database">SD</acronym>, which stores the configuration and state of all RAID volumes in the system.
            </p>
            <p>For Solstice DiskSuite to work correctly, we must have at least two copies of the <acronym title="State Database">SD</acronym> on each disk.</p>
            <p>
				To store the <acronym title="State Database">SD</acronym> when creating a RAID for the root partition, we use separate medium-sized partitions, each of which can hold several copies of the <acronym title="State Database">SD</acronym> (these copies are called State Replicas). A State Replica occupies approximately 4 MB on disk.
            </p>
            <div class="subChapter">
                <h3>Example</h3>
                <div class="quote">
                    <span class="cmd_line_lvl">#&gt;</span>
                    <span class="cmd_line">metadb -f -c 3 -a c0t0d0s7</span>
                    <br />
                    <span class="cmd_line_lvl">#&gt;</span>
                    <span class="cmd_line">metadb -c 3 -a c0t1d0s7</span>
                    <br />
                    <span class="cmd_line_lvl">#&gt;</span>
                    <span class="cmd_line">metadb</span>
                    <br />
                    <span class="cmd_line">flags first blk block count</span>
                    <br />
                    <span class="cmd_line">...</span>
                    <br />
                    <span class="cmd_line">a u 16 1034 /dev/dsk/c0t0d0s7</span>
                    <br />
                    <span class="cmd_line">a u 1050 1034 /dev/dsk/c0t0d0s7</span>
                    <br />
                    <span class="cmd_line">a u 2084 1034 /dev/dsk/c0t0d0s7</span>
                    <br />
                    <span class="cmd_line">a u 16 1034 /dev/dsk/c0t1d0s7</span>
                    <br />
                    <span class="cmd_line">a u 1050 1034 /dev/dsk/c0t1d0s7</span>
                    <br />
                    <span class="cmd_line">a u 2084 1034 /dev/dsk/c0t1d0s7</span>
                </div>
            </div>
        </div>
        <div class="chapter">
            <h2>Creation of the RAID</h2>
            <p>We create two metadevices, d11 and d12. One of them will contain the existing root partition, the other the not-yet-initialized partition; together they will be joined into the RAID.</p>
            <div class="subChapter">
                <span class="cmd_line_lvl">#&gt;</span>
                <span class="cmd_line">metainit -f d11 1 1 c0t0d0s0</span>
                <br />
                <span class="cmd_line">d11: Concat/Stripe is setup</span>
                <br />
                <span class="cmd_line_lvl">#&gt;</span>
                <span class="cmd_line">metainit d12 1 1 c0t1d0s0</span>
                <br />
                <span class="cmd_line">d12: Concat/Stripe is setup</span>
            </div>
            <p>We create the mirror d10 (a Mirror, i.e. RAID 1), which for now contains only the single metadevice d11; in other words, the real mirror does not exist yet.</p>
            <div class="subChapter">
                <span class="cmd_line_lvl">#&gt;</span>
                <span class="cmd_line">metainit d10 -m d11</span>
                <br />
                <span class="cmd_line">d10: Mirror is setup</span>
            </div>
            <p>So that the system is correctly configured to mount the root filesystem from the metadevice, we must run metaroot, which updates the configuration files /etc/vfstab and /etc/system:</p>
            <div class="subChapter">
                <span class="cmd_line_lvl">#&gt;</span>
                <span class="cmd_line">metaroot d10</span>
            </div>
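The effect of metaroot can be checked by looking at the root entry in /etc/vfstab; the output below is an illustrative sketch assuming the example mirror d10:

```shell
# After metaroot, the root entry in /etc/vfstab should reference the
# mirror metadevice instead of the physical slice.
grep md /etc/vfstab
# illustrative result:
#   /dev/md/dsk/d10   /dev/md/rdsk/d10   /   ufs   1   no   -
```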
            <p>After that, it is recommended to run lockfs, which flushes pending file system transactions to disk (on Solaris 9 this step is not required):</p>
            <div class="subChapter">
                <span class="cmd_line_lvl">#&gt;</span>
                <span class="cmd_line">lockfs -fa</span>
            </div>
            <p>We reboot the system so that the RAID level 1 becomes active, although it does not yet have its real mirror.</p>
            <div class="subChapter">
                <span class="cmd_line_lvl">#&gt;</span>
                <span class="cmd_line">init 6</span>
            </div>
            <p>After the system reboots, we attach the second metadevice to the mirror:</p>
            <div class="subChapter">
                <span class="cmd_line_lvl">#&gt;</span>
                <span class="cmd_line">metattach d10 d12</span>
                <br />
                <span class="cmd_line">d10: Submirror d12 is attached</span>
            </div>
            <p>The process that builds the RAID 1 starts automatically, and we can follow its progress with metastat. We have to wait until this process has finished; the system is then ready.</p>
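Progress can be checked at any time; the output below is an illustrative sketch of what metastat reports while the resynchronization is running (states and percentages will differ on a real system):

```shell
# Show the state of the mirror and its submirrors; during the initial
# sync the second submirror is in the "Resyncing" state.
metastat d10
# illustrative output (abridged):
#   d10: Mirror
#       Submirror 0: d11   State: Okay
#       Submirror 1: d12   State: Resyncing
#       Resync in progress: 47 % done
```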
        </div>
        <div class="chapter">
            <h2>Preparing the system to boot from the backup metadevice</h2>
            <p>To reduce the number of reboots during the procedure, this step can be performed before the array is created.</p>
            <p>To boot from the backup disk (here c0t1d0), we need to know its full device path. In our example:</p>
            <div class="subChapter">
                <span class="cmd_line_lvl">#&gt;</span>
                <span class="cmd_line">ls -l /dev/rdsk/c0t1d0s0</span>
                <br />
                <span class="cmd_line">lrwxrwxrwx 1 root root 55 Mar 5 12:54 /dev/rdsk/c0t1d0s0 -&gt;</span>
                <br />
                <span class="cmd_line">../../devices/sbus@0,f8000000/esp@1,200000/sd@1,0:a,raw</span>
            </div>
            <p>
                <em>The physical device path (the target of the symbolic link, after ../../devices) is what we need.</em>
            </p>
            <p>As soon as we have access to OpenBoot (for example, during the reboot performed while creating the array), we create an alias for this device and add it to the boot sequence, so that the system boots from it automatically if the master disk fails:</p>
            <div class="subChapter">
                <span class="cmd_line">ok nvalias second_root /sbus@0,f8000000/esp@1,200000/sd@1,0:a</span>
                <br />
                <span class="cmd_line">ok printenv boot-device</span>
                <br />
                <span class="cmd_line">boot-device = disk net</span>
                <br />
                <span class="cmd_line">ok setenv boot-device disk second_root net</span>
                <br />
                <span class="cmd_line">boot-device = disk second_root net</span>
                <br />
                <span class="cmd_line">ok nvstore</span>
            </div>
            <p>Thus, if the disk c0t0d0 fails, the system will boot from c0t1d0.</p>
            <p>We can verify that the system boots from the backup disk (only after the array has been completely created, that is, after metattach has been run and the RAID 1 synchronization process has finished):</p>
            <div class="subChapter">
                <span class="cmd_line">ok boot second_root</span>
            </div>
            <p>If everything has been done correctly, the system will boot from the backup disk just as it does from the master. The next boot will again use the master disk.</p>
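To confirm which physical device the running system was actually booted from (a verification step not in the original procedure), the bootpath property recorded by OpenBoot can be inspected; the path shown is the one from this example:

```shell
# The firmware records the device it booted from in the /chosen node;
# prtconf -vp prints the PROM device tree including that property.
prtconf -vp | grep bootpath
# illustrative output when booted from the mirror disk:
#   bootpath:  '/sbus@0,f8000000/esp@1,200000/sd@1,0:a'
```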
        </div>
    </body>
</html>

