ITEM: FM3444L

DCE: reconfigure from DCE/DFS cell, maintain users



Response:

 PROBLEM: Customer has run "strings" against the CDS checkpoint
file and has found references to old clearinghouses and old
nodes that are no longer part of the cell.  He would like a
procedure to save as much as possible of the information in the
CDS (users, servers, etc.) and DFS space, then unconfigure and
start from scratch.

He is uncomfortable with this cell as it was migrated from V1.3
to V2.1.

*ACTION TAKEN: Customer will want to save:
        users
        DFS: mount points, LFS filesets
He would like to set up root.dfs replication during the rebuild.
He does have access to the scripts from the Redbook.

*ACTION TAKEN: emailing the following to customer

<dce_hostname> is the DCE hostname of this system.
<dcetools_path> is the path to the dcetools from the Administration Redbook.
<old_cellname> is the original cell name of the cell before this operation.

Initially, log in as AIX root on the system.

1) dcecp -c account catalog -simple > /tmp/users
2) 
        edit /tmp/users to remove any accounts that do not need to
        be stored and recreated.
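The hand-edit in step 2 can also be sketched as a filter.  The account
names and the list of stock principals below are invented for
illustration; tailor the list to your own cell before relying on it:

```shell
# Demo only: account names are invented; the set of default DCE
# principals to drop is an assumption -- adjust it for your cell.
cat > /tmp/users.demo <<'EOF'
root
cell_admin
jsmith
mjones
EOF
# keep only the site-specific accounts
grep -v -E '^(root|cell_admin|daemon|bin|uucp|nobody)$' /tmp/users.demo
```

This prints only jsmith and mjones, the accounts worth saving in the demo.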

3) cd <dcetools_path>
3.1) dce_login cell_admin
4) get_info_users /tmp/users
5) cd /var/dce/dfs
6)
        For each entry in the "dfstab" file that is a JFS fileset (type ufs),
        record the device name (eg. /dev/lv02).

        Also record the fileset name for each of these filesets.
                This can be found using "fts lsfldb"
                or by using "fts lsmount -dir <mount point>"
                (if using lsmount, do not record the "#" symbol)
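The scan in step 6 can be sketched with awk.  This assumes the common
dfstab column order of block device, aggregate name, type, aggregate id;
verify that against your own /var/dce/dfs/dfstab first (the sample
entries below are invented):

```shell
# Sample dfstab -- entries are invented; the column layout
# (device, aggregate, type, id) is an assumption to verify locally.
cat > /tmp/dfstab.demo <<'EOF'
/dev/lfslv lfs.aggr lfs 1
/dev/lv02 home.aggr ufs 2
/dev/lv03 proj.aggr ufs 3
EOF
# record device and aggregate name for every ufs (JFS) entry
awk '$3 == "ufs" { print $1, $2 }' /tmp/dfstab.demo > /tmp/jfs_filesets
cat /tmp/jfs_filesets
```

The recorded list (/tmp/jfs_filesets) is what you will need again when
re-exporting the JFS filesets later in the procedure.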

7) rmdfs -l all
8) rmdce -l all
9) shutdown -Fr
10) mkdce -n <old_cellname> -h <dce_hostname> sec_srv cds_srv
11) dce_login root
12) mkdfs -e dfs_scm dfs_fldb dfs_srv dfs_repsrv dfs_cl
13) cd /var/dce/dfs
14) cp dfstab dfstab.bak
15)
        edit dfstab to remove any entries for "ufs" filesets
        which should be your JFS filesets that are mounted in DFS space.

16) dfsexport -all
17) dce_login cell_admin
18) fts syncfldb -server /.:/hosts/<dce_hostname>
19) exit;exit;dce_login root
20) cd /:

        This should allow you into DFS space.

21) vi /tmp/acl_update

#!/bin/ksh
echo "fixing $1"
acl_edit "$1" << EOF 2>/dev/null
cell /.:
commit
exit
EOF

22) chmod 755 /tmp/acl_update
23) find . -exec /tmp/acl_update {} \;

        This is to update the cell uuid for all the LFS objects and
        directories.

24)
        For each entry removed in step 15 you will now need to re-export
        these JFS filesets.

    mkdfsjfs -d <device> -f <fileset name>

        (eg. mkdfsjfs -d /dev/lv02 -f home.jfs )
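The re-export can be driven from the list recorded in step 6.  A dry-run
sketch follows (remove the echo to run mkdfsjfs for real; the
device/fileset pairs here are invented examples):

```shell
# /tmp/jfs_filesets is assumed to hold "<device> <fileset name>" pairs,
# one per line, as recorded earlier.  Pairs below are invented.
cat > /tmp/jfs_filesets <<'EOF'
/dev/lv02 home.jfs
/dev/lv03 proj.jfs
EOF
# dry run: print the mkdfsjfs commands that would be executed
while read device fileset
do
        echo mkdfsjfs -d "$device" -f "$fileset"
done < /tmp/jfs_filesets
```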

25) cd <dcetools_path>
26) cp ADD_USER ADD_USER.old
27)
        edit the ADD_USER script to replace lines 95-99
                rgy_edit << EOF >/dev/null 2>&1
                domain principal
                add $princ_name $uid
                quit
                EOF
        with
                dcecp -c principal create $princ_name -uuid $uuid

        This is to allow add_users to re-add the users with the
        same uuid instead of simply using the uid to generate a new uuid.
        This lets us adopt uuids that may already be present in ACLs in
        DFS space.

28) cd dce_users

        This should be the user repository from the scripts

29) 
        In order to tell the Admin scripts that all of these
        users are NEW and should be added to the registry we need
        to change the state to "NEW".
        Run the following script:

#!/bin/ksh
for i in *
do
sed 's/RGY_ENABLED/NEW/' "$i" > "$i.new"
mv "$i.new" "$i"
done

        It is possible that the state is SUSPENDED instead of RGY_ENABLED.
        This is a minor change to the above script.
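A variant of the step-29 script can cover either starting state in one
pass.  The file contents below are mocked for illustration; the real
user files from the Redbook scripts hold full account records:

```shell
# Mocked user repository -- real files hold full account records.
mkdir -p /tmp/dce_users.demo
cd /tmp/dce_users.demo
echo "state RGY_ENABLED" > user1
echo "state SUSPENDED" > user2
# flip either RGY_ENABLED or SUSPENDED to NEW in every file
for i in *
do
sed -e 's/RGY_ENABLED/NEW/' -e 's/SUSPENDED/NEW/' "$i" > "$i.new" &&
mv "$i.new" "$i"
done
cat user1 user2
```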

30) cd ..
31) add_users /tmp/users

At this point everything should be back where you started.
Since you mentioned that root.dfs is already an LFS fileset, the only
caveat to replication would be to ensure you create the RW mount point
before you replicate root.dfs:

1) fts crmount /:/.rw root.dfs -rw
2) fts addsite -fileset root.dfs -server <server> -aggr <aggregate>

        The <server> should be the same server as the original fileset
        for the first addsite.
        This is considered a "staging" copy of root.dfs and is RO.
        <aggregate> would be replaced with a valid aggregate on that server.

PROBLEM:

Customer did find that the cell was setup with -ic and -io
acls for root and cell_admin which have propagated to the rest
of the cell.  These now show up as orphaned uuid acls.  How do
they get rid of these?

*ACTION PLAN: researching, fup 4/22

Response:

Richard,

I have included a script which utilizes the acl_edit command in order
to strip off a given uuid (or two).  Please try it manually first.

The syntax for using it is

remove.orphan.acl <file> <uuid1> [<uuid2>]

You will need to be logged in as a DCE identity with control permission
on the file.
After you confirm that it works for you, it could be implemented recursively
using

cd /:
find . -exec remove.orphan.acl {} \;

Above I do this from the root of DFS.  However, you would want to do this
from the root of the filesets on the system you run it on.  For example, if
SystemA exports a fileset mounted at /:/home, then start from there while
logged in (via dce_login) as root on SystemA.  Each system that exports
filesets could do this work, but I would not suggest running these from
clients, as it could become messy to choose a DCE identity that has control
permission on all the files recursively.

==================

#!/bin/ksh
# This script should be run as DCE root.
FILE=$1
shift
UUID1=$1
UUID2=$2

function remove_acl {
if [ ! -z "$UUID1" ]
then
grep -v "$UUID1" /tmp/$$.acl > /tmp/$$.acl.1
else
cp /tmp/$$.acl /tmp/$$.acl.1
fi

if [ ! -z "$UUID2" ]
then
grep -v "$UUID2" /tmp/$$.acl.1 > /tmp/$$.acl.2
else
cp /tmp/$$.acl.1 /tmp/$$.acl.2
fi
}
# modify acls
acl_edit "$FILE" -l > /tmp/$$.acl
remove_acl
acl_edit "$FILE" -f /tmp/$$.acl.2

# modify initial object and initial container if directory
if [ -d "$FILE" ]
then
acl_edit "$FILE" -ic -l > /tmp/$$.acl
remove_acl
acl_edit "$FILE" -ic -f /tmp/$$.acl.2

acl_edit "$FILE" -io -l > /tmp/$$.acl
remove_acl
acl_edit "$FILE" -io -f /tmp/$$.acl.2
fi

# clean up the work files
rm -f /tmp/$$.acl /tmp/$$.acl.1 /tmp/$$.acl.2
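The heart of the script is the grep -v filter: every ACL line that
mentions the orphaned uuid is dropped before the listing is reapplied.
A quick manual sanity check of that filtering, using a mocked
acl_edit -l listing (entries and uuid are invented for illustration):

```shell
# Mocked ACL listing -- the uuid and entries below are invented.
cat > /tmp/acl.demo <<'EOF'
mask_obj:rwxcid
user_obj:rwxcid
user:00612f4a-5a0c-21cd-8f00-10005aa86e2d:rwx---
other_obj:r-x---
EOF
UUID1=00612f4a-5a0c-21cd-8f00-10005aa86e2d
# every line mentioning the orphaned uuid is dropped
grep -v "$UUID1" /tmp/acl.demo
```

Only the entry carrying the uuid disappears; the remaining entries are
passed back to acl_edit -f untouched.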


Dated: April 1998 Category: N/A