
systemd configurations for Documentum


systemd has been with us for several years now and has slowly made its way into most Linux distributions. While it has generated much controversy among sysV init die-hards, the fact is that it is here to stay, and we Documentum administrators don't have much say in the matter. In practice, it does not impact us very much: a little translation work is all that is needed to switch to it, provided we already went the service way. Most of the time, our custom monolithic script to stop, start and check the status of the several Documentum components can be reused as-is; only its invocation changes. On the other hand, we can take advantage of this opportunity to refactor that big script and define separate units for each component. Since systemd lets us define dependencies between components, we can move these out of the script and into systemd units. As a result, our stop/start script becomes slimmer, more readable and easier to maintain. So let's see how to do all this.

Invocation of the big script

Such a big, monolithic script, let's call it documentum.sh, is executed by dmadmin and has the following typical layout:

...
start:
launch the docbrokers
start the method server
start the docbases
stop:
shut the docbases down
stop the method server
stop the docbrokers
status:
check the docbrokers
check the method server
check the docbases
...
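
As an illustration, here is a minimal sketch of such a script, reusing the standard dm_* scripts and paths that appear later in this post, with just one docbase; a real script would obviously cover all the installed components and add error handling:

#!/bin/bash
# minimal sketch of documentum.sh;
case "$1" in
   start)
      # launch the docbrokers, then the method server, then the docbases;
      /app/dctm/server/dba/dm_launch_docbroker
      /app/dctm/server/shared/wildfly9.0.1/server/startMethodServer.sh
      /app/dctm/server/dba/dm_start_dmtest
      ;;
   stop)
      # reverse order: docbases first, docbrokers last;
      /app/dctm/server/dba/dm_shutdown_dmtest
      /app/dctm/server/shared/wildfly9.0.1/server/stopMethodServer.sh
      /app/dctm/server/dba/dm_stop_docbroker
      ;;
   status)
      # e.g. query the docbroker and list the Documentum server processes;
      dmqdocbroker -t $(hostname) -c getdocbasemap
      pgrep -fl documentum
      ;;
   *)
      echo "Usage: $(basename $0) start|stop|status"; exit 1;;
esac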

For simplicity, let’s assume henceforth that we are logged as root when typing all the systemd-related commands below.
To invoke this script from within systemd, let’s create the documentum.service unit:

cat - <<EndOfUnit > /etc/systemd/system/documentum.service
[Unit]
Description=Documentum components controls

[Service]
Type=oneshot
RemainAfterExit=yes

ExecStart=/usr/bin/sudo -u dmadmin -i /app/dctm/server/dbi/documentum.sh start
ExecStop=/usr/bin/sudo -u dmadmin -i /app/dctm/server/dbi/documentum.sh stop
 
[Install]
WantedBy=multi-user.target
EndOfUnit

The clause Type is oneshot because the unit runs commands that terminate, not services.
Unlike real services whose processes keep running after they’ve been started, dm_* scripts terminate after they have done their job, which is to launch some Documentum executables as background processes; thus, RemainAfterExit is needed to tell systemd that the services are still running once started.
ExecStart and ExecStop are obviously the commands to run to start and stop the service, respectively.
See here for a comprehensive explanation of all the unit’s directives.
Now, activate the service:

systemctl enable documentum.service

This unit has no particular dependencies because all the Documentum-related stuff is self-contained and the needed system dependencies are all available at that point.
In the ExecStart and ExecStop clauses above, root runs the big script as user dmadmin. Extra care should be taken with the "change user" command so the script really runs as dmadmin: it is of paramount importance that sudo be used instead of su. Both are very closely related: a command gets executed as another user, provided the real user has the right to do so (which is the case here because systemd runs as root).

man sudo:
SUDO(8) BSD System Manager's Manual SUDO(8)
 
NAME
sudo, sudoedit — execute a command as another user
 
man su:
SU(1) User Commands SU(1)
 
NAME
su - change user ID or become superuser

However, they behave differently in relation to systemd. With "su - dmadmin -c ", the command gets attached to a session in dmadmin's slice:

systemd-cgls
 
├─1 /usr/lib/systemd/systemd --switched-root --system --deserialize 22
├─user.slice
│ ├─user-618772.slice
│ │ └─session-41.scope
│ │ ├─ 5769 sshd: adm_admin2 [priv
│ │ ├─ 5783 sshd: adm_admin2@pts/
│ │ ├─ 5784 -bash
│ │ ├─11277 systemd-cgls
│ │ └─11278 less
│ └─user-509.slice
│ ├─session-c11.scope
│ │ ├─10988 ./documentum -docbase_name global_registry -security acl -init_file /app/dctm/server/dba/config/global_registry/server.ini
│ └─session-c1.scope
│ ├─6385 ./documentum -docbase_name dmtest -security acl -init_file /app/dctm/server/dba/config/dmtest/server.ini

...

Here, user id 509 is dmadmin. We see that 2 docbase processes are attached to dmadmin’s slice, itself attached to the global user.slice.
With “sudo -u dmadmin -i “, the command gets attached to the system.slice:

systemd-cgls
 
├─1 /usr/lib/systemd/systemd --switched-root --system --deserialize 22
├─user.slice
│ └─user-618772.slice
│ └─session-10.scope
│ ├─4314 sshd: adm_admin2 [priv
│ ├─4589 sshd: adm_admin2@pts/
│ ├─4590 -bash
│ ├─5927 /usr/share/centrifydc/libexec/dzdo service documentum.service start
│ ├─5928 /bin/systemctl start documentum.service
│ ├─5939 /usr/bin/systemd-tty-ask-password-agent --watch
│ ├─5940 /usr/bin/pkttyagent --notify-fd 5 --fallback
│ ├─6219 systemd-cgls
│ └─6220 less
└─system.slice
├─documentum.service
│ ├─5944 /usr/bin/sudo -u dmadmin -i /app/dctm/server/dbi/documentum.sh start
│ ├─5945 /bin/bash /app/dctm/server/dbi/documentum.sh start
│ ├─5975 ./dmdocbroker -port 1489 -init_file /app/dctm/server/dba/Docbroker.ini
│ ├─5991 ./dmdocbroker -port 1491 -init_file /app/dctm/server/dba/Docbrokerdmtest.ini
│ ├─6013 ./documentum -docbase_name global_registry -security acl -init_file /app/dctm/server/dba/config/global_registry/server.ini
│ ├─6023 ./documentum -docbase_name dmtest -security acl -init_file /app/dctm/server/dba/config/dmtest/server.ini
│ ├─6024 sleep 30
│ ├─6055 /app/dctm/server/product/7.3/bin/mthdsvr master 0xfd070016, 0x7fa02da79000, 0x223000 1000712 5 6013 global_registry /app/dctm/server/dba/log
│ ├─6056 /app/dctm/server/product/7.3/bin/mthdsvr master 0xfd070018, 0x7f261269c000, 0x223000 1000713 5 6023 dmtest /app/dctm/server/dba/log

...

Here, user 618772 ran the command "dzdo service documentum.service start" (dzdo is a Centrify command analogous to sudo, but with privileges checked against Active Directory) to start documentum.service, which launched the command "sudo -u dmadmin -i /app/dctm/server/dbi/documentum.sh start" as defined in the unit and attached its processes under system.slice.
The difference is essential: at shutdown, user sessions are closed abruptly, so if a stop/start script is running in one, its stop option will never be invoked.
Processes running under the system.slice on the other hand have their command’s stop option invoked properly so they can cleanly shut down.
This distinction rarely matters because services generally run as root, even though their installation may be owned by some other user; e.g. an apache listening on the default port 80 must run as root. Documentum processes were not designed to run as a service, just as background processes running as dmadmin, but thanks to this trick they can still be managed as services.
At boot time, the unit will be processed and its start commands (there can be many, but here only one for the big script) executed.
It is also possible to invoke the service documentum.service manually:

systemctl start | stop | status documentum.service

The old sysvinit syntax is still available too:

service documentum.service start | stop | status

Thus, everything is in one place and uses a common management interface, which is especially appealing to a system administrator with no particular knowledge of each product installed on each machine under their control, e.g. no need to know that one must become dmadmin and invoke the right dm_* script.
The direct invocation of the unit file is still possible:

/etc/systemd/system/documentum.service start | stop | status

but the service interface is so much simpler.
One remark here: the status clause implemented in the big script above is not the one invoked by the command “systemctl status”:

systemctl status documentum.service
● documentum.service - Documentum Content Server controls for the runtime lifecycle
Loaded: loaded (/etc/systemd/system/documentum.service; enabled; vendor preset: disabled)
Active: inactive (dead) since Mon 2018-10-22 14:03:09 CEST; 4min 6s ago
Process: 25388 ExecStop=/bin/su - dmadmin -c /app/dctm/server/dbi/startstop stop (code=exited, status=0/SUCCESS)
Process: 24069 ExecStart=/bin/su - dmadmin -c sh -c 'echo " ** Starting documentum"' (code=exited, status=0/SUCCESS)
Main PID: 924 (code=exited, status=0/SUCCESS)

Instead, the latter just returns the status of the service per se, not of the resources exposed by the service. It is indeed possible to display the current status of those programs in the same output, but some special work needs to be done for this. Basically, those processes need to periodically push their status to their service by calling systemd-notify; this could be done by a monitoring job, for example. See systemd-notify's man page for more details.
There is no ExecStatus clause in the unit either, although it would make some sense to define a command that asks the service's processes about their status. We still need some custom script for this.
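
As an illustration, here is a minimal sketch of such a status push; it assumes that NotifyAccess=all has been added to the unit's [Service] section so that any process in the service's cgroup may notify systemd, and that a monitoring job runs it periodically from within that cgroup:

# sketch of a periodic status push, assuming NotifyAccess=all in documentum.service;
if sudo -u dmadmin -i /app/dctm/server/dbi/documentum.sh status > /dev/null 2>&1; then
   systemd-notify --status="all Documentum components are up"
else
   systemd-notify --status="some Documentum components are down"
fi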

Splitting the big script

Since the full systemd way is chosen, why not introduce finer service granularity? To do this, each Documentum component can be extracted from the big script and turned into a service of its own, as illustrated below.
Unit documentum.docbrokers.service

cat - <<EndOfUnit > /etc/systemd/system/documentum.docbrokers.service
[Unit]
Description=Documentum docbrokers controls
# no dependencies;

[Service]
Type=oneshot
RemainAfterExit=yes

# there are 2 docbrokers here;
ExecStart=/usr/bin/sudo -u dmadmin -i /app/dctm/server/dba/dm_launch_docbroker
ExecStart=/usr/bin/sudo -u dmadmin -i /app/dctm/server/dba/dm_launch_docbrokerdmtest
ExecStop=/usr/bin/sudo -u dmadmin -i /app/dctm/server/dba/dm_stop_docbroker
ExecStop=/usr/bin/sudo -u dmadmin -i /app/dctm/server/dba/dm_stop_docbrokerdmtest
 
[Install]
WantedBy=multi-user.target
EndOfUnit

Now, activate the service:

systemctl enable documentum.docbrokers.service

The ExecStart and ExecStop lines call the standard docbroker dm_* scripts.

Unit documentum.method-server.service

cat - <<EndOfUnit > /etc/systemd/system/documentum.method-server.service
[Unit]
Description=Documentum method server controls

After=documentum.docbrokers.service
Requires=documentum.docbrokers.service

[Service]
Type=oneshot
RemainAfterExit=yes

ExecStart=/usr/bin/sudo -u dmadmin -i /app/dctm/server/shared/wildfly9.0.1/server/startMethodServer.sh start
ExecStop=/usr/bin/sudo -u dmadmin -i /app/dctm/server/shared/wildfly9.0.1/server/stopMethodServer.sh stop
 
[Install]
WantedBy=multi-user.target
EndOfUnit

Now, activate the service:

systemctl enable documentum.method-server.service

While the dependency on the docbrokers is defined explicitly through the After= and Requires= clauses (see later for an explanation of these clauses), the one with the docbases is a bit ambiguous. Traditionally, the method server is started after the docbases even though, as its name implies, it is a server for the docbases, which are thus its clients. So, logically, it should be started before the docbases, like the docbrokers, and not the other way around. However, the method server executes java code that may use the DfCs and call back into the repository, so the dependency between repositories and method server is two-way. Nevertheless, since it is the docbases that initiate the calls (methods don't execute spontaneously on the method server), it makes sense to start the method server before the repositories and define a dependency from the latter to the former. This will also simplify the systemd configuration if a passphrase must be typed manually to start the docbases (see the paragraph below).
The ExecStart and ExecStop lines call the standard Documentum scripts for starting and stopping the method server.

Unit documentum.docbases.service

cat - <<EndOfUnit > /etc/systemd/system/documentum.docbases.service
[Unit]
Description=Documentum docbases controls

After=documentum.docbrokers.service documentum.method-server.service
Requires=documentum.docbrokers.service documentum.method-server.service

[Service]
Type=oneshot
RemainAfterExit=yes

ExecStart=/usr/bin/sudo -u dmadmin -i /app/dctm/server/dba/dm_start_global_registry
ExecStart=/usr/bin/sudo -u dmadmin -i /app/dctm/server/dba/dm_start_dmtest
ExecStop=/usr/bin/sudo -u dmadmin -i /app/dctm/server/dba/dm_shutdown_global_registry
ExecStop=/usr/bin/sudo -u dmadmin -i /app/dctm/server/dba/dm_shutdown_dmtest
 
[Install]
WantedBy=multi-user.target
EndOfUnit

Now, activate the service:

systemctl enable documentum.docbases.service

Here, the dependencies must be explicitly defined because the docbases need the docbrokers to start. The method server is needed for executing java code requested by the docbases. The After= clause says that the current unit documentum.docbases.service waits until the units listed there have been started. The Requires= clause says that documentum.docbases.service cannot start without the other two units, so they must all start successfully, otherwise documentum.docbases.service fails. By default, units start concurrently, but the After= clause postpones starting documentum.docbases.service until the other two have started.
The ExecStart and ExecStop lines call the standard Documentum scripts for starting and stopping a docbase.
This alternative does not use the custom script any more but exclusively the ones provided by Documentum; one less thing to maintain at the cost of some loss of flexibility, should any special startup logic be required someday. Thus, don’t bin that big script so quickly, just in case.
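
By the way, once the three units are enabled, the ordering and requirement relationships that systemd derived from them can be double-checked, e.g.:

systemctl list-dependencies documentum.docbases.service
systemctl show documentum.docbases.service -p After
systemctl show documentum.docbases.service -p Requires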

Hybrid alternative

The custom monolithic script does everything in one place but lacks the differentiation between components. E.g. the start option starts everything and there is no way to address a single component. An enhanced script, dctm.sh, with the syntax below would be nice:

dctm.sh start|stop|status component

i.e.

dctm.sh start|stop|status docbrokers|docbases|method-server

It could even go as far as differentiating among the repositories and docbrokers:

dctm.sh start|stop|status docbroker:docbroker|docbase:docbase|method-server

A plural keyword syntax could also be used when differentiation is not wanted (or when one is too lazy to specify the component, or when the component's exact name is not known or remembered), to collectively address a given type of component:

dctm.sh start|stop|status [--docbrokers|--docbroker docbroker{,docbroker}|--docbases|--docbase docbase{,docbase}|--method-server]

i.e. a list of components can be specified, or all or each of them at once. If none are specified, all of them are addressed. The highlighted target names are keywords while the italicized ones are values. This is a good exercise in parsing command-line parameters, so let's leave it to the reader!
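
As a starting point, here is a minimal option-parsing sketch for such a dctm.sh; the final echo is a stub standing for the component-specific start/stop/status logic:

#!/bin/bash
# minimal argument-parsing sketch for a hypothetical dctm.sh;
usage() { echo "Usage: dctm.sh start|stop|status [--docbrokers|--docbroker name{,name}|--docbases|--docbase name{,name}|--method-server]"; exit 1; }
action=$1; shift
case "$action" in start|stop|status) ;; *) usage;; esac
targets=()
while [[ $# -gt 0 ]]; do
   case "$1" in
      --docbrokers)    targets+=("docbroker:ALL");;
      --docbroker)     shift; for n in ${1//,/ }; do targets+=("docbroker:$n"); done;;
      --docbases)      targets+=("docbase:ALL");;
      --docbase)       shift; for n in ${1//,/ }; do targets+=("docbase:$n"); done;;
      --method-server) targets+=("method-server");;
      *) usage;;
   esac
   shift
done
# no target specified: address all the components;
[[ ${#targets[@]} -eq 0 ]] && targets=("docbroker:ALL" "method-server" "docbase:ALL")
for t in "${targets[@]}"; do
   echo "performing $action on $t"   # dispatch to the component-specific logic here;
done
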
All these components could be addressed individually, either from the corresponding service unit or via systemd-run (see next paragraph):
Unit documentum.docbrokers.service

...
ExecStart=sudo -u dmadmin -i /app/dctm/server/dbi/dctm.sh start --docbrokers
ExecStop=sudo -u dmadmin -i /app/dctm/server/dbi/dctm.sh stop --docbrokers
...

Unit documentum.method-server.service

...
ExecStart=sudo -u dmadmin -i /app/dctm/server/dbi/dctm.sh start --method-server
ExecStop=sudo -u dmadmin -i /app/dctm/server/dbi/dctm.sh stop --method-server
...

Unit documentum.docbases.service

...
ExecStart=sudo -u dmadmin -i /app/dctm/server/dbi/dctm.sh start --docbases
ExecStop=sudo -u dmadmin -i /app/dctm/server/dbi/dctm.sh stop --docbases
...

As explained above, dctm.sh’s status parameter is not reachable from systemctl but a monitoring agent could put it to good use.
Thus, we have here the granularity of the previous alternative while retaining the flexibility of the monolithic script, e.g. for checking a status (see the next paragraph for another reason to keep the custom script). Each variant has its pros and cons and, as is often the case, flexibility comes at the cost of complexity.

The case of the missing lockbox passphrase

If a lockbox is in use and a passphrase must be entered interactively by an administrator to start the docbase, then that service cannot be started by systemd at boot time because at that point the passphrase is still missing from dmadmin's shared memory. Thus, the docbase start must be delayed until after the passphrase has been loaded. If the service's ExecStart clause is simply removed, systemd will complain, but if we leave it in, the start will effectively fail because of the missing lockbox passphrase. So, how to exit this dead end?
A fake start through the clause ExecStart=/bin/true could replace the real start but then how to start the docbases via systemctl once the passphrase has been entered ?
One possible trick is to leave the invocation of the custom script in the service's start clause but add some logic to the script so it can determine by itself how its start clause was invoked. If it was within, say, 1 minute of uptime, then it is obviously an automatic invocation at boot time. The script then aborts and returns false, so the service is marked "not started" and can later be started manually without first stopping it (which would be necessary if it simply returned a 0 exit code). An administrator would then enter the lockbox passphrase, typically with the command below:

sudo -u dmadmin -i dm_crypto_boot -all -passphrase
then, type the passphrase at the prompt

and manually start the service as written above.
A possible implementation of this logic is:

start)
   MAX_BOOT_TIME=60
   # uptime of the machine, in whole seconds;
   ut=$(read tot idle < /proc/uptime; echo ${tot%.*})
   # still within the boot window: assume an automatic start at boot time and abort;
   [ $ut -lt $MAX_BOOT_TIME ] && exit 1

If the service is later stopped and restarted without rebooting, the uptime will be larger than 1 minute and the enhanced custom script dctm.sh (we need this one because only the docbases must be started; the other components were already started as services at this point) will perform the start itself directly, assuming that the passphrase is now in dmadmin's shared memory (if it's not, the start will fail again and the service will stay in the same state).
This 1-minute delay may look short, but systemd attempts to start as much as possible in parallel, except when dependencies are involved, in which case some serialization is performed. This is another advantage of systemd: a shorter boot time for faster reboots. The fact that most installations now run inside virtual machines makes rebooting even faster. The delay value must not be too large either, because an administrator, who may have done the shutdown themselves, could be waiting behind their keyboard for the reboot to complete in order to log in, enter the passphrase and start the service, which would be rejected by the above logic as too soon after boot.

Running a command as a service

systemd makes it possible to run a command as a service, in which case no unit file is necessary. This is an alternative to the missing lockbox passphrase case. An administrator would first load the passphrase in dmadmin’s shared memory and later manually invoke a custom script, with no special logic involving the uptime, as follows:

dzdo systemd-run --unit=dctm.docbases --slice=system.slice --remain-after-exit sudo -u dmadmin -i /app/dctm/server/dbi/dctm.sh start --docbases

Such services without unit files are called transient services.
Thus, only the docbrokers and the method server would have their respective unit, while the docbases would be started manually as transient services. The enhanced custom script, dctm.sh, is directly invoked here, not the documentum.docbases.service unit file (there is no need for one any more), with the special command-line argument --docbases, as discussed in the previous paragraph.
Thanks to the parameter --slice, the processes will be attached under system.slice and therefore be treated like a service:

├─1 /usr/lib/systemd/systemd --switched-root --system --deserialize 22
├─user.slice
│ └─user-618772.slice
│ └─session-10.scope
│ ├─ 4314 sshd: adm_admin2 [priv
│ ├─ 4589 sshd: adm_admin2@pts/
│ ├─ 4590 -bash
│ ├─15561 systemd-cgls
│ └─15562 systemd-cgls
└─system.slice
├─dctm.docbases.service
│ ├─15347 /usr/bin/sudo -u dmadmin -i /app/dctm/server/dbi/dctm.sh start --docbases
│ ├─15348 /bin/bash /app/dctm/server/dbi/dctm.sh start
│ ├─15378 ./dmdocbroker -port 1489 -init_file /app/dctm/server/dba/Docbroker.ini
│ ├─15395 ./dmdocbroker -port 1491 -init_file /app/dctm/server/dba/Docbrokerdmtest.ini
│ ├─15416 ./documentum -docbase_name global_registry -security acl -init_file /app/dctm/server/dba/config/global_registry/server.ini
│ ├─15426 ./documentum -docbase_name dmtest -security acl -init_file /app/dctm/server/dba/config/dmtest/server.ini

Note how “.service” has been suffixed to the given dynamic unit name dctm.docbases.
The stop and status options are available too for transient services with “systemctl stop|status dctm.docbases.service”.
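
One nuance, though: since no stop command was given to systemd-run above, "systemctl stop" will simply signal the processes in the unit's cgroup, and the transient unit is discarded once stopped; on recent systemd versions, a clean stop command can even be attached at creation time through the --property option. A short sketch:

systemctl status dctm.docbases.service   # available while the transient unit exists
systemctl stop dctm.docbases.service     # signals the cgroup's processes, then discards the unit
# from here on, the unit is unknown to systemd until systemd-run is invoked again;
# optionally, pass a stop command when creating the transient unit (recent systemd only):
dzdo systemd-run --unit=dctm.docbases --slice=system.slice --remain-after-exit --property=ExecStop="/usr/bin/sudo -u dmadmin -i /app/dctm/server/dbi/dctm.sh stop --docbases" sudo -u dmadmin -i /app/dctm/server/dbi/dctm.sh start --docbases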

Useful commands

The following systemd commands can be very useful while troubleshooting and checking the services:

systemctl --all
systemctl list-units --all
systemctl list-units --all --state=active
systemctl list-units --type=service
systemctl list-unit-files
systemctl list-dependencies documentum.docbases.service
systemctl cat documentum.docbases.service
systemctl show documentum.docbases.service
systemctl show documentum.docbases.service -p After
systemctl show documentum.docbases.service -p Before
systemctl mask ...
systemctl unmask ...
rm -r /etc/systemd/system/bad.service.d
rm /etc/systemd/system/bad.service
 
# don't forget to do this after a unit file has been edited;
systemctl daemon-reload
 
# check the journal, e.g. to verify how the processes are stopped at shutdown and restarted at reboot;
journalctl --merge
journalctl -u documentum.docbases.service
 
# reboot the machine;
/sbin/shutdown -r now

Check systemctl’s man pages for more details.

User services

All the systemd commands can be run by an ordinary user (provided the command-line option --user is present) and services can be created under a normal account too. The unit files are then stored in the user's ~/.config/systemd/user directory. The management interface stays the same; it is even possible to have such user services started automatically at boot time (cf. the lingering option) and stopped at system shutdown. Thus, if all we want is a smooth, no-brainer management interface for the Documentum processes, accessible to unprivileged product administrators such as dmadmin, this is a handy feature.
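
As a minimal sketch, and assuming the unit file shown at the beginning (minus the sudo wrappers, which are no longer needed since the unit already runs as dmadmin, and with WantedBy=default.target instead of multi-user.target in the [Install] section), the setup could look like this:

# as dmadmin:
mkdir -p ~/.config/systemd/user
cp documentum.service ~/.config/systemd/user/
systemctl --user daemon-reload
systemctl --user enable --now documentum.service
# as root, so dmadmin's user services start at boot and survive logouts:
loginctl enable-linger dmadmin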

Conclusion

Configuring systemd-style services for Documentum is not such a big deal once we have a clear idea of what we want.
The main advantage of going the service way is to benefit from a uniform management interface, so that any administrator, even without knowledge of the product, can start it, query its status, and stop it. When a passphrase that must be entered interactively is in use, there is no real advantage to a service, except the guarantee that its stop sequence will be invoked at shutdown, so the repository will be left in a consistent state. Actually, for Documentum, and especially in the passphrase case, going the service way versus staying with a custom script or the standard dm_* scripts is more a matter of IT policy than of technical incentive, i.e. the final decision will be more procedural than technical. Nevertheless, having the services' standard management interface, while still keeping custom scripts for more complicated logic, can be very convenient.



SharePoint Application Server Role, Web Server IIS Role Error


Bypassing SharePoint Server 2013 Prerequisites Installation Error On Windows Server 2016

 

SYMPTOMS

Before running the setup of SharePoint 2013 on Windows Server 2016, prerequisites such as the Application Server role and the Web Server role have to be installed, and during that process the following error message appears:

Prerequisite Installation Error

ROOT CAUSE

This error occurs when one or more of the following conditions is true:

  • The product preparation tool does not progress past the configuring application server role, web server role stage.
  • The product preparation tool may be unable to configure and install properly the required windows features for SharePoint.
  • The Application Server role has been deprecated in Windows Server 2016

WORKAROUND

To work around this issue, please follow the steps below:

Method:

Install the following software:

Copy the Windows Server AppFabric setup package to the C drive (do not install it through the wizard) and run the following command:

C:\>.\WindowsServerAppFabricSetup_x64.exe /i CacheClient,CachingService,CacheAdmin /gac

When the installation is done, reboot the server, then install the AppFabric cumulative update (Server App Fabric CU) and reboot the Windows Server again.

Run the setup.exe from the .iso file to complete the installation wizard.



Documentum – MigrationUtil – 1 – Change Docbase ID


This blog is the first of a series that I will publish in the next few days/weeks about how to change a docbase ID, a docbase name, and so on, in Documentum CS.
So, let's dig in with the first one: the docbase ID. I did it on Documentum CS 16.4, with an Oracle database, on a freshly installed docbase.

We will work on the docbase repo1, changing its docbase ID from 101066 (18aca) to 101077 (18ad5).

1. Migration tool overview and preparation

The tool we will use here is MigrationUtil, and the concerned folder is:

[dmadmin@vmtestdctm01 ~]$ ls -rtl $DM_HOME/install/external_apps/MigrationUtil
total 108
-rwxr-xr-x 1 dmadmin dmadmin 99513 Oct 28 23:55 MigrationUtil.jar
-rwxr-xr-x 1 dmadmin dmadmin   156 Jan 19 11:09 MigrationUtil.sh
-rwxr-xr-x 1 dmadmin dmadmin  2033 Jan 19 11:15 config.xml

The default content of MigrationUtil.sh:

[dmadmin@vmtestdctm01 ~]$ cat $DM_HOME/install/external_apps/MigrationUtil/MigrationUtil.sh
#!/bin/sh
CLASSPATH=${CLASSPATH}:MigrationUtil.jar
export CLASSPATH
java -cp "${CLASSPATH}" MigrationUtil

Update it if you need to overload the CLASSPATH during the migration only. That was my case: I had to add the Oracle driver path to the $CLASSPATH, because I received the error below:

...
ERROR...oracle.jdbc.driver.OracleDriver
ERROR...Database connection failed.
Skipping changes for docbase: repo1
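
For example, the modified script could look as follows; the ojdbc8.jar location is an assumption, to be adapted to your environment:

#!/bin/sh
# MigrationUtil.sh with the Oracle JDBC driver appended to the CLASSPATH;
CLASSPATH=${CLASSPATH}:MigrationUtil.jar:$ORACLE_HOME/jdbc/lib/ojdbc8.jar
export CLASSPATH
java -cp "${CLASSPATH}" MigrationUtil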

To make the blog more readable, I will not show the whole content of config.xml; below is the updated version for changing the docbase ID:

...
<properties>
<comment>Database connection details</comment>
<entry key="dbms">oracle</entry> <!-- This would be either sqlserver, oracle, db2 or postgres -->
<entry key="tgt_database_server">vmtestdctm01</entry> <!-- Database Server host or IP -->
<entry key="port_number">1521</entry> <!-- Database port number -->
<entry key="InstallOwnerPassword">install164</entry>
<entry key="isRCS">no</entry>    <!-- set it to yes, when running the utility on secondary CS -->

<!-- <comment>List of docbases in the machine</comment> -->
<entry key="DocbaseName.1">repo1</entry>

<!-- <comment>docbase owner password</comment> -->
<entry key="DocbasePassword.1">install164</entry>

<entry key="ChangeDocbaseID">yes</entry> <!-- To change docbase ID or not -->
<entry key="Docbase_name">repo1</entry> <!-- has to match with DocbaseName.1 -->
<entry key="NewDocbaseID">101077</entry> <!-- New docbase ID -->
...

Set all other entries to no.
The tool will use the above information, and load more from the server.ini file.

Before you start the migration script, you have to adapt the maximum number of open cursors in the database. In my case, with a freshly installed docbase, I had to set the open_cursors value to 1000 (instead of 300):

alter system set open_cursors = 1000

Check with your DB administrator before making any change.
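
To first check the current value, something like the snippet below can be run on the database host; this is just a sketch, to be adapted to your environment:

sqlplus -s / as sysdba <<'EOF'
show parameter open_cursors
EOF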

Without this change, I got the error below:

...
Changing Docbase ID...
Database owner password is read from config.xml
java.sql.SQLException: ORA-01000: maximum open cursors exceeded
	at oracle.jdbc.driver.T4CTTIoer.processError(T4CTTIoer.java:450)
	at oracle.jdbc.driver.T4CTTIoer.processError(T4CTTIoer.java:399)
	at oracle.jdbc.driver.T4C8Oall.processError(T4C8Oall.java:1059)
	at oracle.jdbc.driver.T4CTTIfun.receive(T4CTTIfun.java:522)
	at oracle.jdbc.driver.T4CTTIfun.doRPC(T4CTTIfun.java:257)
	at oracle.jdbc.driver.T4C8Oall.doOALL(T4C8Oall.java:587)
	at oracle.jdbc.driver.T4CPreparedStatement.doOall8(T4CPreparedStatement.java:225)
	at oracle.jdbc.driver.T4CPreparedStatement.doOall8(T4CPreparedStatement.java:53)
	at oracle.jdbc.driver.T4CPreparedStatement.executeForRows(T4CPreparedStatement.java:943)
	at oracle.jdbc.driver.OracleStatement.doExecuteWithTimeout(OracleStatement.java:1150)
	at oracle.jdbc.driver.OraclePreparedStatement.executeInternal(OraclePreparedStatement.java:4798)
	at oracle.jdbc.driver.OraclePreparedStatement.executeUpdate(OraclePreparedStatement.java:4875)
	at oracle.jdbc.driver.OraclePreparedStatementWrapper.executeUpdate(OraclePreparedStatementWrapper.java:1361)
	at SQLUtilHelper.setSQL(SQLUtilHelper.java:129)
	at SQLUtilHelper.processColumns(SQLUtilHelper.java:543)
	at SQLUtilHelper.processTables(SQLUtilHelper.java:478)
	at SQLUtilHelper.updateDocbaseId(SQLUtilHelper.java:333)
	at DocbaseIDUtil.<init>(DocbaseIDUtil.java:61)
	at MigrationUtil.main(MigrationUtil.java:25)
...

2. Before the migration (optional)

Get docbase map from the docbroker:

[dmadmin@vmtestdctm01 ~]$ dmqdocbroker -t vmtestdctm01 -c getdocbasemap
dmqdocbroker: A DocBroker Query Tool
dmqdocbroker: Documentum Client Library Version: 16.4.0000.0185
Targeting port 1489
**************************************************
**     D O C B R O K E R    I N F O             **
**************************************************
Docbroker host            : vmtestdctm01
Docbroker port            : 1490
Docbroker network address : INET_ADDR: 02 5d2 c0a87a01 vmtestdctm01 192.168.122.1
Docbroker version         : 16.4.0000.0248  Linux64
**************************************************
**     D O C B A S E   I N F O                  **
**************************************************
--------------------------------------------
Docbase name        : repo1
Docbase id          : 101066
Docbase description : repo1 repository
...

Create a document in the docbase.
First, create an empty file:

touch /home/dmadmin/DCTMChangeDocbaseExample.txt

Then, create the document in the repository using idql:

create dm_document object
SET title = 'DCTM Change Docbase Document Example',
SET subject = 'DCTM Change Docbase Document Example',
set object_name = 'DCTMChangeDocbaseExample.txt',
SETFILE '/home/dmadmin/DCTMChangeDocbaseExample.txt' with CONTENT_FORMAT= 'msww';

Result:

object_created  
----------------
09018aca8000111b
(1 row affected)

Note the r_object_id.
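
To double-check it, the new document can be dumped before the migration, e.g. through iapi; a quick sketch, with a placeholder password:

iapi repo1 -Udmadmin -Pxxx <<'EOF'
dump,c,09018aca8000111b
exit
EOF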

3. Execute the migration

Before executing the migration, you have to stop the docbase and the docbroker:

$DOCUMENTUM/dba/dm_shutdown_repo1
$DOCUMENTUM/dba/dm_stop_DocBroker

Now, you can execute the migration script:

[dmadmin@vmtestdctm01 ~]$ $DM_HOME/install/external_apps/MigrationUtil/MigrationUtil.sh

Welcome... Migration Utility invoked.
 
Created log File: /app/dctm/product/16.4/product/16.4/install/external_apps/MigrationUtil/MigrationUtilLogs/DocbaseIdChange.log
Changing Docbase ID...
Database owner password is read from config.xml
Finished changing Docbase ID...

Skipping Host Name Change...
Skipping Install Owner Change...
Skipping Server Name Change...
Skipping Docbase Name Change...
Skipping Docker Seamless Upgrade scenario...

Migration Utility completed.

No Error, sounds good ;) All changes have been recorded in the log file:

[dmadmin@vmtestdctm01 ~]$ cat /app/dctm/product/16.4/product/16.4/install/external_apps/MigrationUtil/MigrationUtilLogs/DocbaseIdChange.log
Reading config.xml from path: config.xml
Reading server.ini parameters

Retrieving server.ini path for docbase: repo1
Found path: /app/dctm/product/16.4/dba/config/repo1/server.ini
Set the following properties:

Docbase Name:repo1
Docbase ID:101066
New Docbase ID:101077
DBMS: oracle
DatabaseName: DCTMDB
SchemaOwner: repo1
ServerName: vmtestdctm01
PortNumber: 1521
DatabaseOwner: repo1
-------- Oracle JDBC Connection Testing ------
jdbc:oracle:thin:@vmtestdctm01:1521:DCTMDB
Connected to database
Utility is going to modify Objects with new docbase ID
Sun Jan 27 19:08:58 CET 2019
-----------------------------------------------------------
Processing tables containing r_object_id column
-----------------------------------------------------------
-------- Oracle JDBC Connection Testing ------
jdbc:oracle:thin:@vmtestdctm01:1521:DCTMDB
Connected to database
...
...
-----------------------------------------------------------
Update the object IDs of the Table: DMC_ACT_GROUP_INSTANCE_R with new docbase ID:18ad5
-----------------------------------------------------------
Processing objectID columns
-----------------------------------------------------------
Getting all ID columns from database
-----------------------------------------------------------

Processing ID columns in each documentum table

Column Name: R_OBJECT_ID
Update the ObjectId columns of the Table: with new docbase ID

Processing ID columns in each documentum table

Column Name: R_OBJECT_ID
Update the ObjectId columns of the Table: with new docbase ID
...
...
-----------------------------------------------------------
Update the object IDs of the Table: DM_XML_ZONE_S with new docbase ID:18ad5
-----------------------------------------------------------
Processing objectID columns
-----------------------------------------------------------
Getting all ID columns from database
-----------------------------------------------------------
Processing ID columns in each documentum table
Column Name: R_OBJECT_ID
Update the ObjectId columns of the Table: with new docbase ID
-----------------------------------------------------------
Updating r_docbase_id of dm_docbase_config_s and dm_docbaseid_map_s...
update dm_docbase_config_s set r_docbase_id = 101077 where r_docbase_id = 101066
update dm_docbaseid_map_s set r_docbase_id = 101077 where r_docbase_id = 101066
Finished updating database values...
-----------------------------------------------------------
-----------------------------------------------------------
Updating the new DocbaseID value in dmi_vstamp_s table
...
...
Updating Data folder...
select file_system_path from dm_location_s where r_object_id in (select r_object_id from dm_sysobject_s where r_object_type = 'dm_location' and object_name in (select root from dm_filestore_s))
Renamed '/app/dctm/product/16.4/data/repo1/replica_content_storage_01/00018aca' to '/app/dctm/product/16.4/data/repo1/replica_content_storage_01/00018ad5
Renamed '/app/dctm/product/16.4/data/repo1/replicate_temp_store/00018aca' to '/app/dctm/product/16.4/data/repo1/replicate_temp_store/00018ad5
Renamed '/app/dctm/product/16.4/data/repo1/streaming_storage_01/00018aca' to '/app/dctm/product/16.4/data/repo1/streaming_storage_01/00018ad5
Renamed '/app/dctm/product/16.4/data/repo1/content_storage_01/00018aca' to '/app/dctm/product/16.4/data/repo1/content_storage_01/00018ad5
Renamed '/app/dctm/product/16.4/data/repo1/thumbnail_storage_01/00018aca' to '/app/dctm/product/16.4/data/repo1/thumbnail_storage_01/00018ad5
select file_system_path from dm_location_s where r_object_id in (select r_object_id from dm_sysobject_s where r_object_type = 'dm_location' and object_name in (select log_location from dm_server_config_s))
Renamed '/app/dctm/product/16.4/dba/log/00018aca' to '/app/dctm/product/16.4/dba/log/00018ad5
select r_object_id from dm_ldap_config_s
Finished updating folders...
-----------------------------------------------------------
-----------------------------------------------------------
Updating the server.ini with new docbase ID
-----------------------------------------------------------
Retrieving server.ini path for docbase: repo1
Found path: /app/dctm/product/16.4/dba/config/repo1/server.ini
Backed up '/app/dctm/product/16.4/dba/config/repo1/server.ini' to '/app/dctm/product/16.4/dba/config/repo1/server.ini_docbaseid_backup'
Updated server.ini file:/app/dctm/product/16.4/dba/config/repo1/server.ini
Docbase ID Migration Utility completed!!!
Sun Jan 27 19:09:52 CET 2019

Start the Docbroker and the Docbase:

$DOCUMENTUM/dba/dm_launch_DocBroker
$DOCUMENTUM/dba/dm_start_repo1

4. After the migration (optional)

Get docbase map from the docbroker:

[dmadmin@vmtestdctm01 ~]$ dmqdocbroker -t vmtestdctm01 -c getdocbasemap
dmqdocbroker: A DocBroker Query Tool
dmqdocbroker: Documentum Client Library Version: 16.4.0000.0185
Targeting port 1489
**************************************************
**     D O C B R O K E R    I N F O             **
**************************************************
Docbroker host            : vmtestdctm01
Docbroker port            : 1490
Docbroker network address : INET_ADDR: 02 5d2 c0a87a01 vmtestdctm01 192.168.122.1
Docbroker version         : 16.4.0000.0248  Linux64
**************************************************
**     D O C B A S E   I N F O                  **
**************************************************
--------------------------------------------
Docbase name        : repo1
Docbase id          : 101077
Docbase description : repo1 repository
...

Check the document created before the migration:
Adapt the r_object_id with the new docbase ID: 09018ad58000111b

API> dump,c,09018ad58000111b    
...
USER ATTRIBUTES
  object_name                     : DCTMChangeDocbaseExample.txt
  title                           : DCTM Change Docbase Document Example
  subject                         : DCTM Change Docbase Document Example
...
  r_object_id                     : 09018ad58000111b
...
  i_folder_id                  [0]: 0c018ad580000105
  i_contents_id                   : 06018ad58000050c
  i_cabinet_id                    : 0c018ad580000105
  i_antecedent_id                 : 0000000000000000
  i_chronicle_id                  : 09018ad58000111b

5. Conclusion

After a lot of tests on my VMs, I can say that changing the docbase ID is reliable on a freshly installed docbase. On the other hand, each time I tried it on a "used" docbase, I got errors like:

Changing Docbase ID...
Database owner password is read from config.xml
java.sql.SQLIntegrityConstraintViolationException: ORA-00001: unique constraint (GREPO5.D_1F00272480000139) violated

	at oracle.jdbc.driver.T4CTTIoer.processError(T4CTTIoer.java:450)
	at oracle.jdbc.driver.T4CTTIoer.processError(T4CTTIoer.java:399)
	at oracle.jdbc.driver.T4C8Oall.processError(T4C8Oall.java:1059)
	at oracle.jdbc.driver.T4CTTIfun.receive(T4CTTIfun.java:522)
	at oracle.jdbc.driver.T4CTTIfun.doRPC(T4CTTIfun.java:257)
	at oracle.jdbc.driver.T4C8Oall.doOALL(T4C8Oall.java:587)
	at oracle.jdbc.driver.T4CPreparedStatement.doOall8(T4CPreparedStatement.java:225)
	at oracle.jdbc.driver.T4CPreparedStatement.doOall8(T4CPreparedStatement.java:53)
	at oracle.jdbc.driver.T4CPreparedStatement.executeForRows(T4CPreparedStatement.java:943)
	at oracle.jdbc.driver.OracleStatement.doExecuteWithTimeout(OracleStatement.java:1150)
	at oracle.jdbc.driver.OraclePreparedStatement.executeInternal(OraclePreparedStatement.java:4798)
	at oracle.jdbc.driver.OraclePreparedStatement.executeUpdate(OraclePreparedStatement.java:4875)
	at oracle.jdbc.driver.OraclePreparedStatementWrapper.executeUpdate(OraclePreparedStatementWrapper.java:1361)
	at SQLUtilHelper.setSQL(SQLUtilHelper.java:129)
	at SQLUtilHelper.processColumns(SQLUtilHelper.java:543)
	at SQLUtilHelper.processTables(SQLUtilHelper.java:478)
	at SQLUtilHelper.updateDocbaseId(SQLUtilHelper.java:333)
	at DocbaseIDUtil.<init>(DocbaseIDUtil.java:61)
	at MigrationUtil.main(MigrationUtil.java:25)

I didn't investigate this error enough; it deserves more time, but it wasn't my priority. Anyway, the tool made a correct rollback.

Now, it is your turn to practice; don't hesitate to comment on this blog to share your own experience and opinion :)
In the next blog, I will try to change the docbase name.


How To Deploy Office Web Apps Server 2013


The 4 Steps Of Office Web Apps Server 2013 Installation

Office Web Apps provides browser-based versions of Excel, OneNote, Word and PowerPoint. It also helps users who access files through SharePoint 2013.

The objective of this topic is to define the steps to install Office Web Apps 2013, then create the farm and the binding so that it can be used within a SharePoint 2013 test environment.

For this example, we have the following systems in place:
  • Windows Server 2012 r2
  • SharePoint Server 2013

1) Install Server roles, features & Role services

Server roles:

  • Web server

Features:

  • Ink and Handwriting services

Role services:

  • Dynamic Content Compression
  • Windows Authentication
  • .Net Extensibility 4.5
  • ASP.Net 4.5
  • Server Side Includes

Restart the server.

Note that if your installation is done on Windows Server 2016, the feature “Ink and Handwriting services” is now a default part of the server and no longer requires a separate package.

2) Install Office Web Apps

Launch the setup from the DVD file and wait until the installation is finished.

3) Create Office Web Apps Farm

1) Specify the internal URL for the server name
2) Use administrative privileges
3) Run the PowerShell command "New-OfficeWebAppsFarm -InternalURL http://servername -AllowHttp -EditingEnabled"

This command allows HTTP since the farm is internal, and the -EditingEnabled switch enables editing so that users can edit documents.

To verify that the farm was successfully created, type the URL "http://servername/hosting/discovery" in the browser.

4) Bind Office Web Apps and SharePoint

The communication between the two sides still needs to be done through the HTTP protocol.

1) Use administrative privileges
2) Switch over to the SharePoint Management Shell
3) Run the command “New-SPWOPIBinding -ServerName servername -AllowHTTP”

The command should return that the HTTP protocol is used internally, along with a list of bindings.

Check SharePoint default internal zone:

Get-SPWOPIZone

If it is HTTPS, change it into HTTP:

Set-SPWOPIZone -Zone internal-http

Set the authentication OAuth over HTTP to true:

  • $config = (Get-SPSecurityTokenServiceConfig)
  • $config.AllowOAuthOverHttp = $true
  • $config.update()

SharePoint can now use Office Web Apps.

To avoid errors, a few points need to be verified before testing Office Web Apps within SharePoint:

a) Check SharePoint authentication mode (claims-based and not classic) using PowerShell:

  • $WebAppURL = "http://webapp/"
  • (Get-SPWebApplication $WebAppURL).UseClaimsAuthentication

b) Check that the login account is not a system account but a testing account.

c) Check that editing is enabled in Office Web Apps; if it is false, set it to true using the PowerShell command:

  • Set-OfficeWebAppsFarm -EditingEnabled:$true

d) Check that Office Web Apps has enough memory

If you need help, more details can be found here.


Documentum – Process Builder Installation Fails


A couple of weeks ago, at a customer site, I received an incident from the application team regarding an error that occurred while installing Process Builder. The error message was:
"The Process Engine license has not been enabled or is invalid in the 'RADEV' repository.
The Process Engine license must be enabled to use the Process Builder.
Please see your system administrator."

The error appears when selecting the repository:

Before investigating this incident, I had to learn more about Process Builder, as it is usually managed by the application team.
In fact, Documentum Process Builder is software for creating business process templates, used to formalize the steps required to complete a business process such as an approval process; the goal is to extend the basic functionality of Documentum Workflow Manager.
It is a client application that can be installed on any computer, but before installing Process Builder you need to prepare your content server and repository by installing the Process Engine, because the CS handles the check-in, check-out, versioning and archiving, and all created processes are saved in the repository… Hmmm, so maybe the issue is that my content server or repository is not well configured?

To rule out the client side, I asked the application team to confirm the docbroker and port configured in C:\Documentum\Config\dfc.properties.

On the Content Server side, we used the Process Engine installer, which installs the Process Engine on all repositories served by the Content Server, deploys the bpm.ear file on the Java Method Server, and installs the DAR files on each repository.

So let’s check the installation:

1. The BPM URL http://Server:9080/bpm/modules.jsp is reachable:

2. No error in the bpm log file $JBOSS_HOME/server/DctmServer_MethodServer/logs/bpm-runtime.log.

3. BPM and XCP DARs are correctly installed in the repository:

select r_object_id, object_name, r_creation_date from dmc_dar where object_name in ('BPM', 'xcp');
080f42a480026d98 BPM 8/29/2018 10:43:35
080f42a48002697d xcp 8/29/2018 10:42:11

4. The Process Engine module is missing from the docbase configuration:

	API> retrieve,c,dm_docbase_config
	...
	3c0f42a480000103
	API> dump,c,l
	...
	USER ATTRIBUTES

		object_name                : RADEV
		title                      : RADEV Repository
	...
	SYSTEM ATTRIBUTES

		r_object_id                : 3c0f42a480000103
		r_object_type              : dm_docbase_config
		...
		r_module_name           [0]: Snaplock
								[1]: Archive Service
								[2]: CASCADING_AUTO_DELEGATE
								[3]: MAX_AUTO_DELEGATE
								[4]: Collaboration
		r_module_mode           [0]: 0
								[1]: 0
								[2]: 0
								[3]: 1
								[4]: 3

We know the root cause of this incident now :D
To resolve the issue, add the Process Engine module to the docbase config:

API>retrieve,c,dm_docbase_config
API>append,c,l,r_module_name
Process Engine
API>append,c,l,r_module_mode
3
API>save,c,l

Check after update:

	API> retrieve,c,dm_docbase_config
	...
	3c0f42a480000103
	API> dump,c,l
	...
	USER ATTRIBUTES

		object_name                : RADEV
		title                      : RADEV Repository
	...
	SYSTEM ATTRIBUTES

		r_object_id                : 3c0f42a480000103
		r_object_type              : dm_docbase_config
		...
		r_module_name           [0]: Snaplock
								[1]: Archive Service
								[2]: CASCADING_AUTO_DELEGATE
								[3]: MAX_AUTO_DELEGATE
								[4]: Collaboration
								[5]: Process Engine
		r_module_mode           [0]: 0
								[1]: 0
								[2]: 0
								[3]: 1
								[4]: 3
								[5]: 3
		...

Then I asked the application team to retry the installation, the issue has been resolved.

No manual docbase configuration step is mentioned in the Process Engine Installation Guide; I guess the Process Engine installer should do it automatically.
I will install a new environment in the next few days/weeks, and will keep you informed if there is any news ;)


Documentum – MigrationUtil – 2 – Change Docbase Name


This is the second episode of the MigrationUtil series; today we will change the docbase name. If you missed the first one, you can find it here. I did this change on Documentum CS 16.4, with an Oracle database, on the same docbase I already used to change the docbase ID.
My goal is to do both changes on the same docbase because that’s what I will need in the future.

So, we will be interested in the docbase RepoTemplate, to change its name to repository1.

1. Migration preparation

I will not give the overview of the MigrationUtil, as I already did in the previous blog.
1.a Update the config.xml file
Below is the updated version of config.xml file to change the Docbase Name:

[dmadmin@vmtestdctm01 ~]$ cat $DOCUMENTUM/product/16.4/install/external_apps/MigrationUtil/config.xml
<?xml version="1.0" encoding="UTF-8"?>
<!DOCTYPE properties SYSTEM "http://java.sun.com/dtd/properties.dtd">
<properties>
<comment>Database connection details</comment>
<entry key="dbms">oracle</entry> <!-- This would be either sqlserver, oracle, db2 or postgres -->
<entry key="tgt_database_server">vmtestdctm01</entry> <!-- Database Server host or IP -->
<entry key="port_number">1521</entry> <!-- Database port number -->
<entry key="InstallOwnerPassword">install164</entry>
<entry key="isRCS">no</entry>    <!-- set it to yes, when running the utility on secondary CS -->

<!-- <comment>List of docbases in the machine</comment> -->
<entry key="DocbaseName.1">RepoTemplate</entry>

<!-- <comment>docbase owner password</comment> -->
<entry key="DocbasePassword.1">install164</entry>
...
<entry key="ChangeDocbaseName">yes</entry>
<entry key="NewDocbaseName.1">repository1</entry>
...

Set all other entries to no.
The tool will use the above information, and load more from the server.ini file.

2. Before the migration (optional)

– Get docbase map from the docbroker:

[dmadmin@vmtestdctm01 ~]$ dmqdocbroker -t vmtestdctm01 -c getdocbasemap
dmqdocbroker: A DocBroker Query Tool
dmqdocbroker: Documentum Client Library Version: 16.4.0000.0185
Targeting port 1489
**************************************************
**     D O C B R O K E R    I N F O             **
**************************************************
Docbroker host            : vmtestdctm01
Docbroker port            : 1490
Docbroker network address : INET_ADDR: 02 5d2 c0a87a01 vmtestdctm01 192.168.122.1
Docbroker version         : 16.4.0000.0248  Linux64
**************************************************
**     D O C B A S E   I N F O                  **
**************************************************
--------------------------------------------
Docbase name        : RepoTemplate
Docbase id          : 1000600
Docbase description : Template Repository
Govern docbase      : 
Federation name     : 
Server version      : 16.4.0000.0248  Linux64.Oracle
Docbase Roles       : Global Registry
...

– Create a document in the docbase:
Create an empty file

touch /home/dmadmin/DCTMChangeDocbaseExample.docx

Create document in the repository using idql

create dm_document object
SET title = 'DCTM Change Docbase Document Example',
SET subject = 'DCTM Change Docbase Document Example',
set object_name = 'DCTMChangeDocbaseExample.docx',
SETFILE '/home/dmadmin/DCTMChangeDocbaseExample.docx' with CONTENT_FORMAT= 'msw12';

Result:

object_created  
----------------
090f449880001125
(1 row affected)

Note the r_object_id.

3. Execute the migration

3.a Stop the Docbase and the Docbroker

$DOCUMENTUM/dba/dm_shutdown_RepoTemplate
$DOCUMENTUM/dba/dm_stop_DocBroker

3.b Update the database name in the server.ini file
This is a workaround to avoid the error below:

Database Details:
Database Vendor:oracle
Database Name:DCTMDB
Databse User:RepoTemplate
Database URL:jdbc:oracle:thin:@vmtestdctm01:1521/DCTMDB
ERROR...Listener refused the connection with the following error:
ORA-12514, TNS:listener does not currently know of service requested in connect descriptor

In fact, the tool treats the database name as a database service name and puts a "/" in the URL instead of a ":". The best workaround I found is to update the database_conn value in the server.ini file and put the service name instead of the database name.
Check the tnsnames.ora and note the service name; in my case it is dctmdb.local.

[dmadmin@vmtestdctm01 ~]$ cat $ORACLE_HOME/network/admin/tnsnames.ora 
DCTMDB =
  (DESCRIPTION =
    (ADDRESS = (PROTOCOL = TCP)(HOST = vmtestdctm01)(PORT = 1521))
    (CONNECT_DATA =
      (SERVER = DEDICATED)
      (SERVICE_NAME = dctmdb.local)
    )
  )

Make the change in the server.ini file:

[dmadmin@vmtestdctm01 ~]$ vi $DOCUMENTUM/dba/config/RepoTemplate/server.ini
...
[SERVER_STARTUP]
docbase_id = 1000600
docbase_name = RepoTemplate
server_config_name = RepoTemplate
database_conn = dctmdb.local
database_owner = RepoTemplate
database_password_file = /app/dctm/product/16.4/dba/config/RepoTemplate/dbpasswd.txt
service = RepoTemplate
root_secure_validator = /app/dctm/product/16.4/dba/dm_check_password
install_owner = dmadmin
...

Don't worry, we will roll back this change before starting the docbase ;)

3.c Execute the MigrationUtil script

[dmadmin@vmtestdctm01 ~]$ $DM_HOME/install/external_apps/MigrationUtil/MigrationUtil.sh

Welcome... Migration Utility invoked.
 
Skipping Docbase ID Changes...
Skipping Host Name Change...
Skipping Install Owner Change...
Skipping Server Name Change...

Changing Docbase Name...
Created new log File: /app/dctm/product/16.4/product/16.4/install/external_apps/MigrationUtil/MigrationUtilLogs/DocbaseNameChange.log
Finished changing Docbase Name...

Skipping Docker Seamless Upgrade scenario...
Migration Utility completed.

No error was raised here, but it doesn't mean that everything is OK… please check the log file:

[dmadmin@vmtestdctm01 ~]$ cat /app/dctm/product/16.4/product/16.4/install/external_apps/MigrationUtil/MigrationUtilLogs/DocbaseNameChange.log
Start: 2019-02-01 19:32:10.631
Changing Docbase Name
=====================

DocbaseName: RepoTemplate
New DocbaseName: repository1
Retrieving server.ini path for docbase: RepoTemplate
Found path: /app/dctm/product/16.4/dba/config/RepoTemplate/server.ini

Database Details:
Database Vendor:oracle
Database Name:dctmdb.local
Databse User:RepoTemplate
Database URL:jdbc:oracle:thin:@vmtestdctm01:1521/dctmdb.local
Successfully connected to database....

Processing Database Changes...
Created database backup File '/app/dctm/product/16.4/product/16.4/install/external_apps/MigrationUtil/MigrationUtilLogs/DocbaseNameChange_DatabaseRestore.sql'
select r_object_id,object_name from dm_sysobject_s where r_object_type = 'dm_docbase_config' and object_name = 'RepoTemplate'
update dm_sysobject_s set object_name = 'repository1' where r_object_id = '3c0f449880000103'
select r_object_id,docbase_name from dm_docbaseid_map_s where docbase_name = 'RepoTemplate'
update dm_docbaseid_map_s set docbase_name = 'repository1' where r_object_id = '440f449880000100'
select r_object_id,file_system_path from dm_location_s where file_system_path like '%RepoTemplate%'
update dm_location_s set file_system_path = '/app/dctm/product/16.4/data/repository1/content_storage_01' where r_object_id = '3a0f44988000013f'
...
update dm_job_s set target_server = 'repository1.RepoTemplate@vmtestdctm01' where r_object_id = '080f4498800003e0'
...
select i_stamp from dmi_vstamp_s where i_application = 'dmi_dd_attr_info'
...
Successfully updated database values...
...
Backed up '/app/dctm/product/16.4/dba/dm_start_RepoTemplate' to '/app/dctm/product/16.4/product/16.4/install/external_apps/MigrationUtil/MigrationUtilLogs/dm_start_RepoTemplate_docbase_RepoTemplate.backup'
Updated dm_startup script.
Renamed '/app/dctm/product/16.4/dba/dm_start_RepoTemplate' to '/app/dctm/product/16.4/dba/dm_start_repository1'
Backed up '/app/dctm/product/16.4/dba/dm_shutdown_RepoTemplate' to '/app/dctm/product/16.4/product/16.4/install/external_apps/MigrationUtil/MigrationUtilLogs/dm_shutdown_RepoTemplate_docbase_RepoTemplate.backup'
Updated dm_shutdown script.
Renamed '/app/dctm/product/16.4/dba/dm_shutdown_RepoTemplate' to '/app/dctm/product/16.4/dba/dm_shutdown_repository1'
WARNING...File /app/dctm/product/16.4/dba/config/RepoTemplate/rkm_config.ini doesn't exist. RKM is not configured
Finished processing File changes...

Processing Directory Changes...
Renamed '/app/dctm/product/16.4/data/RepoTemplate' to '/app/dctm/product/16.4/data/repository1'
Renamed '/app/dctm/product/16.4/dba/config/RepoTemplate' to '/app/dctm/product/16.4/dba/config/repository1'
Renamed '/app/dctm/product/16.4/dba/auth/RepoTemplate' to '/app/dctm/product/16.4/dba/auth/repository1'
Renamed '/app/dctm/product/16.4/share/temp/replicate/RepoTemplate' to '/app/dctm/product/16.4/share/temp/replicate/repository1'
Renamed '/app/dctm/product/16.4/share/temp/ldif/RepoTemplate' to '/app/dctm/product/16.4/share/temp/ldif/repository1'
Renamed '/app/dctm/product/16.4/server_uninstall/delete_db/RepoTemplate' to '/app/dctm/product/16.4/server_uninstall/delete_db/repository1'
Finished processing Directory Changes...
...
Processing Services File Changes...
Backed up '/etc/services' to '/app/dctm/product/16.4/product/16.4/install/external_apps/MigrationUtil/MigrationUtilLogs/services_docbase_RepoTemplate.backup'
ERROR...Couldn't update file: /etc/services (Permission denied)
ERROR...Please update services file '/etc/services' manually with root account
Finished changing docbase name 'RepoTemplate'

Finished changing docbase name....
End: 2019-02-01 19:32:23.791

Here, the error is justified… let's update the services file manually.

3.d Change the service
As root, change the service name:

[root@vmtestdctm01 ~]$ vi /etc/services
...
repository1				49402/tcp               # DCTM repository native connection
repository1_s       	49403/tcp               # DCTM repository secure connection

3.e Change back the Database name in the server.ini file

[dmadmin@vmtestdctm01 ~]$ vi $DOCUMENTUM/dba/config/repository1/server.ini
...
[SERVER_STARTUP]
docbase_id = 1000600
docbase_name = repository1
server_config_name = RepoTemplate
database_conn = DCTMDB
...

3.f Start the Docbroker and the Docbase

$DOCUMENTUM/dba/dm_launch_DocBroker
$DOCUMENTUM/dba/dm_start_repository1

3.g Check the docbase log

[dmadmin@vmtestdctm01 ~]$ tail -5 $DOCUMENTUM/dba/log/RepoTemplate.log
...
2019-02-01T19:43:15.677455	16563[16563]	0000000000000000	[DM_WORKFLOW_I_AGENT_START]info:  "Workflow agent master (pid : 16594, session 010f449880000007) is started sucessfully."
IsProcessAlive: Process ID 0 is not > 0
2019-02-01T19:43:15.677967	16563[16563]	0000000000000000	[DM_WORKFLOW_I_AGENT_START]info:  "Workflow agent worker (pid : 16595, session 010f44988000000a) is started sucessfully."
IsProcessAlive: Process ID 0 is not > 0
2019-02-01T19:43:16.680391	16563[16563]	0000000000000000	[DM_WORKFLOW_I_AGENT_START]info:  "Workflow agent worker (pid : 16606, session 010f44988000000b) is started sucessfully." 

You are saying the log name is still RepoTemplate.log ;) Yes! Because in my case the docbase name and the server name were the same before I changed the docbase name:

[dmadmin@vmtestdctm01 ~]$ vi $DOCUMENTUM/dba/config/repository1/server.ini
[SERVER_STARTUP]
docbase_id = 1000600
docbase_name = repository1
server_config_name = RepoTemplate
database_conn = DCTMDB
database_owner = RepoTemplate
database_password_file = /app/dctm/product/16.4/dba/config/repository1/dbpasswd.txt
service = repository1
root_secure_validator = /app/dctm/product/16.4/dba/dm_check_password
install_owner = dmadmin

Be patient, in the next episode we will see how we can change the server name :)

4. After the migration (optional)

Get docbase map from the docbroker:

[dmadmin@vmtestdctm01 ~]$ dmqdocbroker -t vmtestdctm01 -c getdocbasemap
dmqdocbroker: A DocBroker Query Tool
dmqdocbroker: Documentum Client Library Version: 16.4.0000.0185
Targeting port 1489
**************************************************
**     D O C B R O K E R    I N F O             **
**************************************************
Docbroker host            : vmtestdctm01
Docbroker port            : 1490
Docbroker network address : INET_ADDR: 02 5d2 c0a87a01 vmtestdctm01 192.168.122.1
Docbroker version         : 16.4.0000.0248  Linux64
**************************************************
**     D O C B A S E   I N F O                  **
**************************************************
--------------------------------------------
Docbase name        : repository1
Docbase id          : 1000600
Docbase description : Template Repository
Govern docbase      : 
Federation name     : 
Server version      : 16.4.0000.0248  Linux64.Oracle
Docbase Roles       : Global Registry
...

It's not very nice to keep the old description of the docbase… Use the idql query below to change it:

Update dm_docbase_config object set title='Renamed Repository' where object_name='repository1';

Check after change:

[dmadmin@vmtestdctm01 ~]$ dmqdocbroker -t vmtestdctm01 -c getdocbasemap
...
Docbase name        : repository1
Docbase id          : 1000600
Docbase description : Renamed Repository
...

Check the document created before the migration:
document id : 090f449880001125

API> dump,c,090f449880001125
...
USER ATTRIBUTES

  object_name                     : DCTMChangeDocbaseExample.docx
  title                           : DCTM Change Docbase Document Example
  subject                         : DCTM Change Docbase Document Example
...

5. Conclusion

Well, the tool works, but as you saw, we needed a workaround to make the change. That's not great; hopefully it will be fixed in future versions.
In the next episode I will change the server config name, see you there ;)


A few scripting languages for Documentum


Besides the obsolete dmbasic, the insular dmawk, the verbose java with the DfCs, and the limited iapi (for API) and idql (for DQL) command-line tools, Documentum does not offer any scripting language to the administrator, and the out-of-the-box experience is quite frustrating in this respect. It has been so even before the java trend, so it is not a maneuver to force the use of the DfCs or to rely on them for administrative tasks. It looks more like an oversight, or like scripting was considered a low-priority need.
Of course, people didn't stop at this situation and developed their own bindings with their preferred scripting language. I found db::Documentum for perl, yours truly's DctmAPI.py for python (refer to the article here), dmgawk for gawk (see here), and of course all the JVM-based languages that leverage the DfCs such as groovy, beanshell, jython, jruby, etc… Such JVM-based scripting languages actually only need to import the DfCs library and off they go for the next killer script. In this article, I'll show how to set up the binding for a few of those languages under the linux O/S.

db::Documentum

This is a perl v5 module that permits access to the Documentum api from the perl interpreter. It was developed by M. Scott Roth, see here, originally only for the Windows O/S and EDMS v3.1.5. Thanks to other contributors, it is now compilable under several flavors of Unices, including Linux. It is downloadable from here.
You’ll need the GNU C compiler to generate the module. Here is a detailed, step by step description of the installation procedure.

# download the archive Db-Documentum-1.64.tar.gz from here http://www.perl.com/CPAN/modules/by-module/Db/
# decompress it in, say, db::Documentum
tar -zxvf Db-Documentum-1.64.tar.gz
 
# move to the newly created directory Db-Documentum-1.64;
cd Db-Documentum-1.64
 
# prepare the following needed paths;
# DM_HOME
# path to the Documentum installation, e.g. /home/dmadmin/documentum
# DM_LIB
# path to the Documentum libdmcl.so library, e.g. ${DM_HOME}/product/7.3/bin
# note: there is also the obsolescent libdmcl40.so but I've encountered problems with it, mostly "Segmentation fault (core dumped)" crashes, so use the JNI-based libdmcl.so instead; it starts more slowly as it uses java but it is more reliable and is still supported;
# DM_INCLUDE
# path to the include file dmapp.h, e.g. ${DM_HOME}/share/sdk/include
 
# edit the linux section in Makefile.PL and provide the above paths;
# also, move up the $DM_LIB initialization before the dmcl.so test and comment the line beginning with $DM_CLIENT_LIBS =
# here is how that section looks like after editing it:

elsif ( $OS =~ /linux/i ) {
 
# The path to your Documentum client installation.
$DM_HOME = '/home/dmadmin/documentum';
 
# This is kinda a gottcha, the Linux stuff is in unix/linux
# You may have to tweak these.
 
# Path to documentum client libraries.
#$DM_LIB = "$DM_HOME/unix/linux";
$DM_LIB = "$DM_HOME/product/7.3/bin";
 
# dmcl.so file
if (! -e "$DM_LIB/libdmcl.so") {
warn "\n*** WARNING *** Could not find $DM_LIB/libdmcl.so.\nThe module will not make without " .
"libdmcl.so.\n";
}
 
# Path to directory where dmapp.h lives.
#$DM_INCLUDE = "-I/documentum/share/sdk/include/";
$DM_INCLUDE = "-I$DM_HOME/share/sdk/include/";
 
#$DM_CLIENT_LIBS = "-L$DM_LIB -ldmapi -ldmupper -ldmlower -ldmcommon -ldmupper -lcompat";
$DM_RPC_LIBS = "-L$DM_LIB -lnwrpc -lnwstcp";
$OS_LIBS = "-lsocket -lnsl -lintl";
$CPP_LIBS = "-lC";
$LD_LIBS = "-ldl";
$CPP_INC = "";
$CCFLAGS = "";
}

 
# execute the Makefile.PL;
perl Makefile.PL
 
# if the error below occurs, you must install the perl-devel module using the native package deployment tool for your distribution,
# e.g. sudo yum install perl-devel for centos:
# Can't locate ExtUtils/MakeMaker.pm in @INC (@INC contains: /usr/local/lib64/perl5 /usr/local/share/perl5 /usr/lib64/perl5/vendor_perl /usr/share/perl5/vendor_perl /usr/lib64/perl5 /usr/share/perl5 .) at Makefile.PL line 1.
#BEGIN failed--compilation aborted at Makefile.PL line 1.
 
# a Makefile file has been generated; correct the 2 lines below as showed;
EXTRALIBS = -L/home/dmadmin/documentum/product/7.3/bin -ldmcl
LDLOADLIBS = -L/home/dmadmin/documentum/product/7.3/bin -ldmcl -lc
 
# use the newly produced Makefile;
make
 
# run some tests to check the new module;
make test
 
# the test completes successfully but, sometimes, it is followed by SIGSEGV in the JVM;
# as it occurs at program termination, it can be ignored;
 
# install the new perl module system-wide;
sudo make install

Now that we have the module, let's use it in a simple test case: dumping all the dm_sysobjects linked in the cabinet /dmadmin (its id is 0c00c35080000105) in the out-of-the-box dmtest repository.

cat my-test.pl 
#!/usr/bin/perl

use Db::Documentum qw(:all);
use Db::Documentum::Tools qw(:all);

# print version
Db::Documentum::version;

$docbase = "dmtest";
$username = "dmadmin";
$password = "dmadmin";

# connect;
$result = dm_Connect($docbase, $username, $password) || die("could not connect in " . $docbase . " as " . $username . " with password " . $password);

# run the query;
$status = dmAPIExec("execquery,c,,select r_object_id, r_object_type, object_name from dm_sysobject where folder(ID('0c00c35080000105'))");
if (1 != $status) {
   $err_mess = dmAPIGet("getmessage,c");
   print $err_mess;
   die();
}
$query_id = dmAPIGet("getlastcoll,c");
printf "%-16s  %-20s  %s\n", "r_object_id", "r_object_type", "object_name";
while (dmAPIExec("next,c," . $query_id)) {
   $r_object_id = dmAPIGet("get,c," . $query_id . ",r_object_id");
   $r_object_type = dmAPIGet("get,c," . $query_id . ",r_object_type");
   $object_name = dmAPIGet("get,c," . $query_id . ",object_name");
   printf "%16s  %-20s  %s\n", $r_object_id, $r_object_type, $object_name;
}
dmAPIExec("close,c," . $query_id);

# disconnect;
dmAPIExec("disconnect,c");
exit;

The script is very trivial and needs little explanation. Note the new functions dm_Connect, dmAPIExec and dmAPIGet. dmAPISet, dmAPIInit and dmAPIDeInit are also available, but the last two don't need to be invoked explicitly, for they are called automatically at module load-time.
Example of execution:

perl my-test.pl
 
Perl version: 5.016003
Db::Documentum version: 1.64
DMCL version: 7.3.0000.0205
 
r_object_id r_object_type object_name
0800c3508000019b dm_job dm_PropagateClientRights
0800c3508000019c dm_job dm_PostUpgradeAction
0800c35080000408 dmc_wfsd_type_info integer
0800c35080000409 dmc_wfsd_type_info boolean
0800c3508000040a dmc_wfsd_type_info double
0800c3508000040b dmc_wfsd_type_info string
0800c3508000040c dmc_wfsd_type_info date
0800c3508000040d dmc_wfsd_type_info repeating integer
0800c3508000040e dmc_wfsd_type_info repeating boolean
0800c3508000040f dmc_wfsd_type_info repeating double
0800c35080000410 dmc_wfsd_type_info repeating string
0800c35080000411 dmc_wfsd_type_info repeating date
0800c35080000426 dm_sysobject dm_indexAgentAcquireLock
0800c35080000587 dm_client_rights dfc_localhost_c0XP4a
0800c35080001065 dm_jms_config JMS dmtest:9080 for dmtest.dmtest
0800c35080001066 dm_jms_config JMS localhost.localdomain:9080 for dmtest.dmtest
0b00c35080000233 dm_folder Workspace Customizations
1700c3508000015d dm_outputdevice Default LP Printer
3a00c3508000013f dm_location storage_01
3a00c35080000140 dm_location common
3a00c35080000141 dm_location events
3a00c35080000142 dm_location log
3a00c35080000143 dm_location config
3a00c35080000144 dm_location dm_dba
3a00c35080000145 dm_location auth_plugin
3a00c35080000146 dm_location ldapcertdb_loc
3a00c35080000147 dm_location temp
3a00c35080000148 dm_location dm_ca_store_fetch_location
3a00c35080000153 dm_location convert
3a00c35080000154 dm_location dsearch
3a00c35080000155 dm_location nls_chartrans
3a00c35080000156 dm_location check_signature
3a00c35080000157 dm_location validate_user
3a00c35080000158 dm_location assume_user
3a00c35080000159 dm_location secure_common_area_writer
3a00c3508000015a dm_location change_password_local
3a00c3508000015b dm_location thumbnail_storage_01
3a00c3508000015c dm_location streaming_storage_01
3a00c35080000226 dm_location replicate_location
3a00c35080000227 dm_location replica_storage_01
3e00c35080000149 dm_mount_point share
6700c35080000100 dm_plugin CSEC Plugin
6700c35080000101 dm_plugin Snaplock Connector

Now, the power of perl and its more than 25,000 modules is at our fingertips to help us tackle those hairy administrative tasks!

groovy

Being a JVM-based language, groovy runs on top of a JVM and therefore benefits from all its advantages, such as automatic garbage collection (although this is not an exclusivity of java) and portability (ditto), and can tap into the countless existing java libraries (ditto).
groovy is used in Documentum’s xPlore.
groovy is a powerful, yet easy to learn, programming language still actively maintained by the Apache foundation, cf. here. Similar to java but without its verbosity, it should instantly appeal to java developers doing Documentum administrative tasks.
groovy comes with groovysh, a comfortable and powerful interactive shell for trying out statements and experimenting with the language.
By importing the DfCs, we can use groovy to access Documentum repositories. Knowledge of the DfCs is required of course.
To install groovy, use the distribution’s package manager; e.g. on my Ubuntu, I’ve used:

sudo apt-get install groovy

while on Centos, the following command will do it:

sudo yum install groovy

To test it, let’s use the same program as for perl, but rewritten a la groovy:

#! /usr/bin/groovy

import static java.lang.System.*;
import java.io.*;

import com.documentum.fc.client.*;
import com.documentum.fc.common.*;

   static void main(String[] args) {
      docbroker = "dmtest";
      docbase = " dmtest";
      username = "dmadmin";
      password = "dmadmin";
   
      println("attempting to connect to " + docbase + " as " + username + "/" + password + " via docbroker " + docbroker);
   
      try {
         def client = DfClient.getLocalClient();
      
         def config = client.getClientConfig();
         config.setString ("primary_host", docbroker);
      
         def logInfo = new DfLoginInfo();
         logInfo.setUser(username);
         logInfo.setPassword(password);
         def docbase_session = client.newSession(docbase, logInfo);
      
         if (docbase_session != null) {
            println("Got a session");
      
            // do something in the session;
            def folderId = new DfId("0c00c35080000105");
            def folder = docbase_session.getObject(folderId);
            def attrList = "r_object_id,r_object_type,object_name";
            def coll = folder.getContents(attrList);
      
            while (coll.next())
               System.out.printf("ObjectId=%-16s r_object_type=%-20s ObjectName=%s\n",
                                 coll.getString("r_object_id"),
                                 coll.getString("r_object_type"),
                                 coll.getString("object_name"));
            println("Finished");
            docbase_session.disconnect();
         }
         else
            println("Didn't get a session");
      }
      catch (e) {
         println("Exception was: " + e);
      }
   }

Lines 6 & 7 import the DfC so don’t forget to add them to the CLASSPATH; normally they are because the execution environment is a Documentum client, e.g.:

export JAVA_HOME=/home/dmadmin/documentum/shared/java64/1.8.0_77
export CLASSPATH=/home/dmadmin/documentum/shared/dfc/dfc.jar
export PATH=$JAVA_HOME/bin:$PATH

Line 15 & 38 show that besides its own built-in functions, groovy can also use equivalent functions from the java libraries.
To invoke the script:

groovy tgroovy.gry
# or make it executable and call it:
chmod +x tgroovy.gry
./tgroovy.gry

Here is its output:

attempting to connect to dmtest as dmadmin/dmadmin via docbroker dmtest
Got a session
ObjectId=0800c3508000019b r_object_type=dm_job ObjectName=dm_PropagateClientRights
ObjectId=0800c3508000019c r_object_type=dm_job ObjectName=dm_PostUpgradeAction
ObjectId=0800c35080000408 r_object_type=dmc_wfsd_type_info ObjectName=integer
ObjectId=0800c35080000409 r_object_type=dmc_wfsd_type_info ObjectName=boolean
ObjectId=0800c3508000040a r_object_type=dmc_wfsd_type_info ObjectName=double
...
ObjectId=3a00c35080000227 r_object_type=dm_location ObjectName=replica_storage_01
ObjectId=3e00c35080000149 r_object_type=dm_mount_point ObjectName=share
ObjectId=6700c35080000100 r_object_type=dm_plugin ObjectName=CSEC Plugin
ObjectId=6700c35080000101 r_object_type=dm_plugin ObjectName=Snaplock Connector
Finished

jython

jython is a python implementation written in java, see here.
As such, it offers most of the features of the powerful python language, although it stays behind the latest python version (v2.5.3 vs. 3.7).
Like java, groovy, jruby, scala, etc…, jython runs inside a JVM and can use all the java libraries such as the DfCs, becoming a Documentum client with no changes except adding the DfCs to the $CLASSPATH.
jython appeals especially to people who already know python; as for groovy, a basic knowledge of the DfCs is required.
To install jython, use your distribution’s package manager, e.g.

# for ubuntu:
sudo apt-get install jython

Make sure the DfCs are present in $CLASSPATH, otherwise add them:

export CLASSPATH=/home/dmadmin/documentum/shared/dfc/dfc.jar:$CLASSPATH

When running the test script below, the DfCs complain about a bad date format:

20:42:05,482 ERROR [File Watcher] com.documentum.fc.common.DfPreferences - [DFC_PREFERENCE_BAD_VALUE] Bad value for preference "dfc.date_format", value="M/d/yyyy, h:mm:ss a"
com.documentum.fc.common.DfException: Illegal syntax found in the date format 'M/d/yyyy, h:mm:ss a'. The default localized short date format will be used.
at com.documentum.fc.common.DfException.newIllegalDateFormatException(DfException.java:109)

Evidently, they are unhappy with the default date format. The work-around is to force one in the dfc.properties file by adding the line below (choose a date format that best suits you):

dfc.date_format=dd.MM.yyyy HH:mm:ss

Since the error did not occur with groovy (nor with the provided JNI-enabled command-line tools such as iapi, idql and dmawk), it is not the DfCs per se that have this problem but the combination of java + jython + DfCs.
Here comes the test script:

#!/usr/bin/env jython

# install jython via your O/S package manager;
# export CLASSPATH=/home/dmadmin/documentum/shared/dfc/dfc.jar:$CLASSPATH
# edit the documentum/shared/config/dfc.properties and add a dfc.date_format property, e.g.:
# dfc.date_format=dd.MM.yyyy HH:mm:ss
# execute:
#   jython test.jy
# or:
#   chmod +x test.jy; ./test.jy
# can also be executed interactively as follows:
# start jython:
#    jython
# call the test script;
#    execfile("/home/dmadmin/test.jy")

import traceback
import com.documentum.fc.client as DFCClient
import com.documentum.fc.common as DFCCommon

docbroker = "dmtest"
docbase = " dmtest"
username = "dmadmin"
password = "dmadmin"
print("attempting to connect to " + docbase + " as " + username + "/" + password + " via docbroker " + docbroker)
try:
  client = DFCClient.DfClient.getLocalClient()

  config = client.getClientConfig()
  config.setString ("primary_host", docbroker)

  logInfo = DFCCommon.DfLoginInfo()
  logInfo.setUser(username)
  logInfo.setPassword(password)
  docbase_session = client.newSession(docbase, logInfo)

  if docbase_session is not None:
    print("Got a session")
    # do something in the session;
    folderId = DFCCommon.DfId("0c00c35080000105");
    folder = docbase_session.getObject(folderId);
    attrList = "r_object_id,r_object_type,object_name";
    coll = folder.getContents(attrList);
    while(coll.next()):
       print("ObjectId=" + "%-16s" % coll.getString("r_object_id") + 
             " r_object_type=" + "%-20s" % coll.getString("r_object_type") +
             " ObjectName=" + coll.getString("object_name"));
    print("Finished");
    docbase_session.disconnect()
  else:
    print("Didn't get a session")
except Exception:
    traceback.print_exc()

Execution:

jython test.jy
WARNING: An illegal reflective access operation has occurred
WARNING: Illegal reflective access by org.python.core.PySystemState (file:/usr/share/java/jython-2.5.3.jar) to method java.io.Console.encoding()
WARNING: Please consider reporting this to the maintainers of org.python.core.PySystemState
WARNING: Use --illegal-access=warn to enable warnings of further illegal reflective access operations
WARNING: All illegal access operations will be denied in a future release
attempting to connect to dmtest as dmadmin/dmadmin via docbroker dmtest
Got a session
ObjectId=0800c3508000019b r_object_type=dm_job ObjectName=dm_PropagateClientRights
ObjectId=0800c3508000019c r_object_type=dm_job ObjectName=dm_PostUpgradeAction
ObjectId=0800c35080000408 r_object_type=dmc_wfsd_type_info ObjectName=integer
ObjectId=0800c35080000409 r_object_type=dmc_wfsd_type_info ObjectName=boolean
ObjectId=0800c3508000040a r_object_type=dmc_wfsd_type_info ObjectName=double
...
ObjectId=3a00c35080000227 r_object_type=dm_location ObjectName=replica_storage_01
ObjectId=3e00c35080000149 r_object_type=dm_mount_point ObjectName=share
ObjectId=6700c35080000100 r_object_type=dm_plugin ObjectName=CSEC Plugin
ObjectId=6700c35080000101 r_object_type=dm_plugin ObjectName=Snaplock Connector
Finished

Ironically, the jython launcher is a perl script; it basically initializes the java and python environment variables such as the classpath, the java options, and the jython home and path. If the initial WARNINGs are disruptive, edit that script and correct the problem, or just redirect stderr to null, e.g.:

jython test.jy 2> /dev/null

So, which one to choose?

To summarize, the decision tree below may help you choose one scripting language among the preceding ones.

DfCs knowledge ?:
    java proficiency ?:
        choose groovy with the DfCs
    else python proficiency ?:
        choose jython with the DfCs
    else select one of the following ones:
        get acquainted with one of the above languages
        | choose another JVM-based language
        | give up the DfCs and use DQL/API with perl, python or gawk instead (see below)
else perl proficiency ?:
    choose db::Documentum
else python proficiency ?:
    choose python and DctmAPI.py
else nawk/gawk proficiency ?:
    choose gawk and the dmgawk binding
else select one of the following:
    learn one of the above scripting languages
    | develop a Documentum binding for your preferred scripting language not in the list
    | hire dbi-services for your administrative tasks or projects ;-)

The DfCs are clearly indicated for java programmers. They are still supported and new features are always accessible from them. There are tasks which cannot be done through the API or DQL but only through the DfCs, though generally those are out of the scope of an administrator. Note that even the non-java, non-DfCs languages still end up invoking the DfCs in the background, because they are linked with the libdmcl.so library, which makes JNI behind-the-scenes calls to the DfCs for them, thus hiding their complexity. Hopefully, this shared library will stay with us for some time still, otherwise our scripting language choice will be seriously restricted to the JVM-based languages and the DfCs.
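
To make that last point concrete, here is a minimal python sketch of a binding-less client that goes straight through libdmcl.so with ctypes; it is an illustration under stated assumptions (the library must be reachable, e.g. via LD_LIBRARY_PATH, and export the classic dmAPIInit/dmAPIGet/dmAPIExec/dmAPIDeInit entry points), and it is, in essence, what the above bindings do for us behind the scenes:

#!/usr/bin/env python3
# minimal sketch: a Documentum client with no binding at all, just ctypes on top
# of libdmcl.so; assumptions: the library is in the loader's path and exports
# the classic dmAPI* entry points; docbase/credentials are the demo ones;
import ctypes

dmcl = ctypes.CDLL("libdmcl.so")
dmcl.dmAPIGet.restype = ctypes.c_char_p         # API calls take and return C strings
dmcl.dmAPIGet.argtypes = [ctypes.c_char_p]
dmcl.dmAPIExec.argtypes = [ctypes.c_char_p]     # returns an int status

dmcl.dmAPIInit()                                             # starts the underlying JVM + DfCs
session = dmcl.dmAPIGet(b"connect,dmtest,dmadmin,dmadmin")   # returns a session id
if session:
    print("server version: " +
          dmcl.dmAPIGet(b"get," + session + b",serverconfig,r_server_version").decode())
    dmcl.dmAPIExec(b"disconnect," + session)
dmcl.dmAPIDeInit()

Expect a slowish start-up: as noted in the db::Documentum section, libdmcl.so fires up a JVM and calls the DfCs for us.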


Documentum – MigrationUtil – 3 – Change Server Config Name


In the previous blog, I changed the docbase name from RepoTemplate to repository1 using MigrationUtil; in this blog, it is the Server Config Name's turn to be changed.

In general, the repository name and the server config name are the same, except in a High Availability installation.
You can find the Server Config Name in the server.ini file:

[dmadmin@vmtestdctm01 ~]$ cat $DOCUMENTUM/dba/config/repository1/server.ini
[SERVER_STARTUP]
docbase_id = 1000600
docbase_name = repository1
server_config_name = RepoTemplate
database_conn = DCTMDB
...

1. Migration preparation

To change the server config name to repository1, you first need to update the configuration file of MigrationUtil, like below:

[dmadmin@vmtestdctm01 ~]$ cat $DM_HOME/install/external_apps/MigrationUtil/config.xml
<?xml version="1.0" encoding="UTF-8"?>
<!DOCTYPE properties SYSTEM "http://java.sun.com/dtd/properties.dtd">
<properties>
<comment>Database connection details</comment>
<entry key="dbms">oracle</entry> <!-- This would be either sqlserver, oracle, db2 or postgres -->
<entry key="tgt_database_server">vmtestdctm01</entry> <!-- Database Server host or IP -->
<entry key="port_number">1521</entry> <!-- Database port number -->
<entry key="InstallOwnerPassword">install164</entry>
<entry key="isRCS">no</entry>    <!-- set it to yes, when running the utility on secondary CS -->

<!-- <comment>List of docbases in the machine</comment> -->
<entry key="DocbaseName.1">repository1</entry>

<!-- <comment>docbase owner password</comment> -->
<entry key="DocbasePassword.1">install164</entry>

...

<entry key="ChangeServerName">yes</entry>
<entry key="NewServerName.1">repository1</entry>

Set all other entries to no.
The tool will use the information above, and load more from the server.ini file.

2. Execute the migration

Use the script below to execute the migration:

[dmadmin@vmtestdctm01 ~]$ cat $DM_HOME/install/external_apps/MigrationUtil/MigrationUtil.sh
#!/bin/sh
CLASSPATH=${CLASSPATH}:MigrationUtil.jar
export CLASSPATH
java -cp "${CLASSPATH}" MigrationUtil

Update it if you need to overload the CLASSPATH only during migration.

2.a Stop the Docbase and the DocBroker

$DOCUMENTUM/dba/dm_shutdown_repository1
$DOCUMENTUM/dba/dm_stop_DocBroker

2.b Update the database name in the server.ini file
As during the Docbase Name change, this is a workaround to avoid the error below:

...
Database URL:jdbc:oracle:thin:@vmtestdctm01:1521/DCTMDB
ERROR...Listener refused the connection with the following error:
ORA-12514, TNS:listener does not currently know of service requested in connect descriptor

Check the tnsnames.ora and note the service name, in my case dctmdb.local.

[dmadmin@vmtestdctm01 ~]$ cat $ORACLE_HOME/network/admin/tnsnames.ora 
DCTMDB =
  (DESCRIPTION =
    (ADDRESS = (PROTOCOL = TCP)(HOST = vmtestdctm01)(PORT = 1521))
    (CONNECT_DATA =
      (SERVER = DEDICATED)
      (SERVICE_NAME = dctmdb.local)
    )
  )

Make the change in the server.ini file:

[dmadmin@vmtestdctm01 ~]$ vi $DOCUMENTUM/dba/config/repository1/server.ini
...
[SERVER_STARTUP]
docbase_id = 1000600
docbase_name = repository1
server_config_name = RepoTemplate
database_conn = dctmdb.local
...

2.c Execute the migration script

[dmadmin@vmtestdctm01 ~]$ $DM_HOME/install/external_apps/MigrationUtil/MigrationUtil.sh

Welcome... Migration Utility invoked.
 
Skipping Docbase ID Change...
Skipping Host Name Change...
Skipping Install Owner Change...

Created log File: /app/dctm/product/16.4/product/16.4/install/external_apps/MigrationUtil/MigrationUtilLogs/ServerNameChange.log
Changing Server Name...
Database owner password is read from config.xml
Finished changing Server Name...

Skipping Docbase Name Change...
Skipping Docker Seamless Upgrade scenario...

Migration Utility completed.

All changes have been recorded in the log file:

[dmadmin@vmtestdctm01 ~]$ cat /app/dctm/product/16.4/product/16.4/install/external_apps/MigrationUtil/MigrationUtilLogs/ServerNameChange.log
Start: 2019-02-02 19:55:52.531
Changing Server Name
=====================

DocbaseName: repository1
Retrieving server.ini path for docbase: repository1
Found path: /app/dctm/product/16.4/dba/config/repository1/server.ini
ServerName: RepoTemplate
New ServerName: repository1

Database Details:
Database Vendor:oracle
Database Name:dctmdb.local
Databse User:RepoTemplate
Database URL:jdbc:oracle:thin:@vmtestdctm01:1521/dctmdb.local
Successfully connected to database....

Validating Server name with existing servers...
select object_name from dm_sysobject_s where r_object_type = 'dm_server_config'

Processing Database Changes...
Created database backup File '/app/dctm/product/16.4/product/16.4/install/external_apps/MigrationUtil/MigrationUtilLogs/ServerNameChange_DatabaseRestore.sql'
select r_object_id,object_name from dm_sysobject_s where r_object_type = 'dm_server_config' and object_name = 'RepoTemplate'
update dm_sysobject_s set object_name = 'repository1' where r_object_id = '3d0f449880000102'
select r_object_id,object_name from dm_sysobject_s where r_object_type = 'dm_jms_config' and object_name like '%repository1.RepoTemplate%'
update dm_sysobject_s set object_name = 'JMS vmtestdctm01:9080 for repository1.repository1' where r_object_id = '080f4498800010a9'
select r_object_id,object_name from dm_sysobject_s where r_object_type = 'dm_cont_transfer_config' and object_name like '%repository1.RepoTemplate%'
update dm_sysobject_s set object_name = 'ContTransferConfig_repository1.repository1' where r_object_id = '080f4498800004ba'
select r_object_id,target_server from dm_job_s where target_server like '%repository1.RepoTemplate%'
update dm_job_s set target_server = 'repository1.repository1@vmtestdctm01' where r_object_id = '080f4498800010d3'
update dm_job_s set target_server = 'repository1.repository1@vmtestdctm01' where r_object_id = '080f44988000035e'
update dm_job_s set target_server = 'repository1.repository1@vmtestdctm01' where r_object_id = '080f44988000035f'
update dm_job_s set target_server = 'repository1.repository1@vmtestdctm01' where r_object_id = '080f449880000360'
update dm_job_s set target_server = 'repository1.repository1@vmtestdctm01' where r_object_id = '080f449880000361'
update dm_job_s set target_server = 'repository1.repository1@vmtestdctm01' where r_object_id = '080f449880000362'
update dm_job_s set target_server = 'repository1.repository1@vmtestdctm01' where r_object_id = '080f449880000363'
update dm_job_s set target_server = 'repository1.repository1@vmtestdctm01' where r_object_id = '080f449880000364'
update dm_job_s set target_server = 'repository1.repository1@vmtestdctm01' where r_object_id = '080f449880000365'
update dm_job_s set target_server = 'repository1.repository1@vmtestdctm01' where r_object_id = '080f449880000366'
update dm_job_s set target_server = 'repository1.repository1@vmtestdctm01' where r_object_id = '080f449880000367'
update dm_job_s set target_server = 'repository1.repository1@vmtestdctm01' where r_object_id = '080f449880000372'
update dm_job_s set target_server = 'repository1.repository1@vmtestdctm01' where r_object_id = '080f449880000373'
update dm_job_s set target_server = 'repository1.repository1@vmtestdctm01' where r_object_id = '080f449880000374'
update dm_job_s set target_server = 'repository1.repository1@vmtestdctm01' where r_object_id = '080f449880000375'
update dm_job_s set target_server = 'repository1.repository1@vmtestdctm01' where r_object_id = '080f449880000376'
update dm_job_s set target_server = 'repository1.repository1@vmtestdctm01' where r_object_id = '080f449880000377'
update dm_job_s set target_server = 'repository1.repository1@vmtestdctm01' where r_object_id = '080f449880000378'
update dm_job_s set target_server = 'repository1.repository1@vmtestdctm01' where r_object_id = '080f449880000379'
update dm_job_s set target_server = 'repository1.repository1@vmtestdctm01' where r_object_id = '080f44988000037a'
update dm_job_s set target_server = 'repository1.repository1@vmtestdctm01' where r_object_id = '080f44988000037b'
update dm_job_s set target_server = 'repository1.repository1@vmtestdctm01' where r_object_id = '080f449880000386'
update dm_job_s set target_server = 'repository1.repository1@vmtestdctm01' where r_object_id = '080f449880000387'
update dm_job_s set target_server = 'repository1.repository1@vmtestdctm01' where r_object_id = '080f449880000388'
update dm_job_s set target_server = 'repository1.repository1@vmtestdctm01' where r_object_id = '080f449880000389'
update dm_job_s set target_server = 'repository1.repository1@vmtestdctm01' where r_object_id = '080f449880000e42'
update dm_job_s set target_server = 'repository1.repository1@vmtestdctm01' where r_object_id = '080f449880000cb1'
update dm_job_s set target_server = 'repository1.repository1@vmtestdctm01' where r_object_id = '080f449880000d02'
update dm_job_s set target_server = 'repository1.repository1@vmtestdctm01' where r_object_id = '080f449880000d04'
update dm_job_s set target_server = 'repository1.repository1@vmtestdctm01' where r_object_id = '080f449880000d05'
update dm_job_s set target_server = 'repository1.repository1@vmtestdctm01' where r_object_id = '080f4498800003db'
update dm_job_s set target_server = 'repository1.repository1@vmtestdctm01' where r_object_id = '080f4498800003dc'
update dm_job_s set target_server = 'repository1.repository1@vmtestdctm01' where r_object_id = '080f4498800003dd'
update dm_job_s set target_server = 'repository1.repository1@vmtestdctm01' where r_object_id = '080f4498800003de'
update dm_job_s set target_server = 'repository1.repository1@vmtestdctm01' where r_object_id = '080f4498800003df'
update dm_job_s set target_server = 'repository1.repository1@vmtestdctm01' where r_object_id = '080f4498800003e0'
update dm_job_s set target_server = 'repository1.repository1@vmtestdctm01' where r_object_id = '080f4498800003e1'
update dm_job_s set target_server = 'repository1.repository1@vmtestdctm01' where r_object_id = '080f4498800003e2'
Successfully updated database values...

Processing File changes...
Backed up '/app/dctm/product/16.4/dba/config/repository1/server.ini' to '/app/dctm/product/16.4/dba/config/repository1/server.ini_server_RepoTemplate.backup'
Updated server.ini file:/app/dctm/product/16.4/dba/config/repository1/server.ini
Backed up '/app/dctm/product/16.4/wildfly9.0.1/server/DctmServer_MethodServer/deployments/acs.ear/lib/configs.jar/config/acs.properties' to '/app/dctm/product/16.4/wildfly9.0.1/server/DctmServer_MethodServer/deployments/acs.ear/lib/configs.jar/config/acs.properties_server_RepoTemplate.backup'
Updated acs.properties: /app/dctm/product/16.4/wildfly9.0.1/server/DctmServer_MethodServer/deployments/acs.ear/lib/configs.jar/config/acs.properties
Finished processing File changes...
Finished changing server name 'repository1'

Processing startup and shutdown scripts...
Backed up '/app/dctm/product/16.4/dba/dm_start_repository1' to '/app/dctm/product/16.4/product/16.4/install/external_apps/MigrationUtil/MigrationUtilLogs/dm_start_repository1_server_RepoTemplate.backup'
Updated dm_startup script.
Backed up '/app/dctm/product/16.4/dba/dm_shutdown_repository1' to '/app/dctm/product/16.4/product/16.4/install/external_apps/MigrationUtil/MigrationUtilLogs/dm_shutdown_repository1_server_RepoTemplate.backup'
Updated dm_shutdown script.

Finished changing server name....
End: 2019-02-02 19:55:54.687

2.d Reset the value of database_conn in the server.ini file

[dmadmin@vmtestdctm01 ~]$ vi $DOCUMENTUM/dba/config/repository1/server.ini
...
[SERVER_STARTUP]
docbase_id = 1000600
docbase_name = repository1
server_config_name = repository1
database_conn = DCTMDB
...

3. Check after update

Start the Docbroker and the Docbase:

$DOCUMENTUM/dba/dm_launch_DocBroker
$DOCUMENTUM/dba/dm_start_repository1

Check the log to be sure that the repository has been started correctly. Notice that the log name has been changed from RepoTemplate.log to repository1.log:

[dmadmin@vmtestdctm01 ~]$ tail -5 $DOCUMENTUM/dba/log/repository1.log
...
IsProcessAlive: Process ID 0 is not > 0
2019-02-02T20:00:09.807613	29293[29293]	0000000000000000	[DM_WORKFLOW_I_AGENT_START]info:  "Workflow agent worker (pid : 29345, session 010f44988000000b) is started sucessfully."
IsProcessAlive: Process ID 0 is not > 0
2019-02-02T20:00:10.809686	29293[29293]	0000000000000000	[DM_WORKFLOW_I_AGENT_START]info:  "Workflow agent worker (pid : 29362, session 010f44988000000c) is started sucessfully."

4. Is a manual rollback possible?

In fact, in the MigrationUtilLogs folder, you can find the logs, backups of the start/stop scripts, and also the sql file for a manual rollback:

[dmadmin@vmtestdctm01 ~]$ ls -rtl $DM_HOME/install/external_apps/MigrationUtil/MigrationUtilLogs
total 980
-rw-rw-r-- 1 dmadmin dmadmin   4323 Feb  2 19:55 ServerNameChange_DatabaseRestore.sql
-rwxrw-r-- 1 dmadmin dmadmin   2687 Feb  2 19:55 dm_start_repository1_server_RepoTemplate.backup
-rwxrw-r-- 1 dmadmin dmadmin   3623 Feb  2 19:55 dm_shutdown_repository1_server_RepoTemplate.backup
-rw-rw-r-- 1 dmadmin dmadmin   6901 Feb  2 19:55 ServerNameChange.log

Let's see the content of the sql file:

[dmadmin@vmtestdctm01 ~]$ cat $DM_HOME/install/external_apps/MigrationUtil/MigrationUtilLogs/ServerNameChange_DatabaseRestore.sql
update dm_sysobject_s set object_name = 'RepoTemplate' where r_object_id = '3d0f449880000102';
update dm_sysobject_s set object_name = 'JMS vmtestdctm01:9080 for repository1.RepoTemplate' where r_object_id = '080f4498800010a9';
update dm_sysobject_s set object_name = 'ContTransferConfig_repository1.RepoTemplate' where r_object_id = '080f4498800004ba';
update dm_job_s set target_server = 'repository1.RepoTemplate@vmtestdctm01' where r_object_id = '080f4498800010d3';
update dm_job_s set target_server = 'repository1.RepoTemplate@vmtestdctm01' where r_object_id = '080f44988000035e';
update dm_job_s set target_server = 'repository1.RepoTemplate@vmtestdctm01' where r_object_id = '080f44988000035f';
update dm_job_s set target_server = 'repository1.RepoTemplate@vmtestdctm01' where r_object_id = '080f449880000360';
update dm_job_s set target_server = 'repository1.RepoTemplate@vmtestdctm01' where r_object_id = '080f449880000361';
update dm_job_s set target_server = 'repository1.RepoTemplate@vmtestdctm01' where r_object_id = '080f449880000362';
update dm_job_s set target_server = 'repository1.RepoTemplate@vmtestdctm01' where r_object_id = '080f449880000363';
...

I already noticed that a manual rollback was possible after the Docbase ID and Docbase Name changes, but I didn't test it… I would like to try this one.
So, to roll back:
Stop the Docbase and the Docbroker

$DOCUMENTUM/dba/dm_shutdown_repository1
$DOCUMENTUM/dba/dm_stop_DocBroker

Execute the sql

[dmadmin@vmtestdctm01 ~]$ cd $DM_HOME/install/external_apps/MigrationUtil/MigrationUtilLogs
[dmadmin@vmtestdctm01 MigrationUtilLogs]$ sqlplus /nolog
SQL*Plus: Release 12.1.0.2.0 Production on Sun Feb 17 19:53:12 2019
Copyright (c) 1982, 2014, Oracle.  All rights reserved.

SQL> conn RepoTemplate@DCTMDB
Enter password: 
Connected.
SQL> @ServerNameChange_DatabaseRestore.sql
1 row updated.
1 row updated.
1 row updated.
...

The DB user is still RepoTemplate; it wasn't changed when I changed the docbase name.

Copy back the saved files; you can find the list of the updated and saved files in the log:

cp /app/dctm/product/16.4/dba/config/repository1/server.ini_server_RepoTemplate.backup /app/dctm/product/16.4/dba/config/repository1/server.ini
cp /app/dctm/product/16.4/wildfly9.0.1/server/DctmServer_MethodServer/deployments/acs.ear/lib/configs.jar/config/acs.properties_server_RepoTemplate.backup /app/dctm/product/16.4/wildfly9.0.1/server/DctmServer_MethodServer/deployments/acs.ear/lib/configs.jar/config/acs.properties
cp /app/dctm/product/16.4/product/16.4/install/external_apps/MigrationUtil/MigrationUtilLogs/dm_start_repository1_server_RepoTemplate.backup /app/dctm/product/16.4/dba/dm_start_repository1
cp /app/dctm/product/16.4/product/16.4/install/external_apps/MigrationUtil/MigrationUtilLogs/dm_shutdown_repository1_server_RepoTemplate.backup /app/dctm/product/16.4/dba/dm_shutdown_repository1

Think about changing back the database connection in /app/dctm/product/16.4/dba/config/repository1/server.ini (see step 2.d).

Then start the DocBroker and the Docbase:

$DOCUMENTUM/dba/dm_launch_DocBroker
$DOCUMENTUM/dba/dm_start_repository1

Check the repository log:

[dmadmin@vmtestdctm01 ~]$ tail -5 $DOCUMENTUM/dba/log/RepoTemplate.log
...
2019-02-02T20:15:59.677595	19200[19200]	0000000000000000	[DM_WORKFLOW_I_AGENT_START]info:  "Workflow agent worker (pid : 19232, session 010f44988000000a) is started sucessfully."
IsProcessAlive: Process ID 0 is not > 0
2019-02-02T20:16:00.679566	19200[19200]	0000000000000000	[DM_WORKFLOW_I_AGENT_START]info:  "Workflow agent worker (pid : 19243, session 010f44988000000b) is started sucessfully."
IsProcessAlive: Process ID 0 is not > 0
2019-02-02T20:16:01.680888	19200[19200]	0000000000000000	[DM_WORKFLOW_I_AGENT_START]info:  "Workflow agent worker (pid : 19255, session 010f44988000000c) is started sucessfully."

Yes, the rollback works correctly! :D That said, I hope you will never have to do it on a production environment. ;)



Documentum – Documents not transferred to WebConsumer


Receiving an incident is never a pleasure, but sharing the solution always is!
A few days ago, I received an incident regarding WebConsumer on a production environment, saying that documents were not transferred to WebConsumer as expected.

The issue didn't happen for all documents, which is why I directly suspected the High Availability configuration of this environment. Moreover, I knew that the IDS is installed only on CS1 (as designed). So I checked the JMS logs on:
CS1: no errors found there.

CS2: errors found:

2019-02-11 04:05:39,097 UTC INFO  [stdout] (default task-60) [DEBUG] - c.e.d.a.c.m.lifecycle.D2LifecycleConfig       : D2LifecycleConfig::applyMethod start 'WCPublishDocumentMethod'
2019-02-11 04:05:39,141 UTC INFO  [stdout] (default task-60) [DEBUG] - c.e.d.a.c.m.lifecycle.D2LifecycleConfig       : D2LifecycleConfig::applyMethod before session apply 'WCPublishDocumentMethod' time: 0.044s
2019-02-11 04:05:39,773 UTC INFO  [stdout] (default task-89) 2019-02-11 04:05:39,773 UTC ERROR [com.domain.repository1.dctm.methods.WCPublishDoc] (default task-89) DfException:: THREAD: default task-89; 
MSG: [DM_METHOD_E_JMS_APP_SERVER_NAME_NOTFOUND]error:  "The app_server_name/servlet_name 'WebCache' is not specified in dm_server_config/dm_jms_config."; ERRORCODE: 100; NEXT: null

To cross check:

On CS1:

[dmadmin@CONTENT_SERVER1 ~]$ cd $DOCUMENTUM/shared/wildfly9.0.1/server/DctmServer_MethodServer/log
[dmadmin@CONTENT_SERVER1 log]$ grep DM_METHOD_E_JMS_APP_SERVER_NAME_NOTFOUND server.log | wc -l
0

On CS2:

[dmadmin@CONTENT_SERVER2 ~]$ cd $DOCUMENTUM/shared/wildfly9.0.1/server/DctmServer_MethodServer/log
[dmadmin@CONTENT_SERVER2 log]$ grep DM_METHOD_E_JMS_APP_SERVER_NAME_NOTFOUND server.log | wc -l
60

So I checked the app servers list configured in the dm_server_config:

On CS1:

API> retrieve,c,dm_server_config
...
3d01e24080000102
API> dump,c,3d01e24080000102
...
USER ATTRIBUTES

  object_name                     : repository1
...
  app_server_name              [0]: do_method
                               [1]: do_mail
                               [2]: FULLTEXT_SERVER1_PORT_IndexAgent
                               [3]: WebCache
                               [4]: FULLTEXT_SERVER2_PORT_IndexAgent
  app_server_uri               [0]: https://CONTENT_SERVER1:9082/DmMethods/servlet/DoMethod
                               [1]: https://CONTENT_SERVER1:9082/DmMail/servlet/DoMail
                               [2]: https://FULLTEXT_SERVER1:PORT/IndexAgent/servlet/IndexAgent
                               [3]: https://CONTENT_SERVER1:6679/services/scs/publish
                               [4]: https://FULLTEXT_SERVER2:PORT/IndexAgent/servlet/IndexAgent
...

Good, WebCache is configured here.

On CS2:

API> retrieve,c,dm_server_config
...
3d01e24080000255
API> dump,c,3d01e24080000255
...
USER ATTRIBUTES

  object_name                     : repository1
...
  app_server_name              [0]: do_method
                               [1]: do_mail
                               [2]: FULLTEXT_SERVER1_PORT_IndexAgent
                               [3]: FULLTEXT_SERVER2_PORT_IndexAgent
  app_server_uri               [0]: https://CONTENT_SERVER1:9082/DmMethods/servlet/DoMethod
                               [1]: https://CONTENT_SERVER1:9082/DmMail/servlet/DoMail
                               [2]: https://FULLTEXT_SERVER1:PORT/IndexAgent/servlet/IndexAgent
                               [3]: https://FULLTEXT_SERVER2:PORT/IndexAgent/servlet/IndexAgent
...

Ok! The root cause of this error is clear now.

The concerned method is WCPublishDocumentMethod, but applied when? And by whom?

I noticed that in the log above:

D2LifecycleConfig::applyMethod start 'WCPublishDocumentMethod'

So, WCPublishDocumentMethod is applied by D2LifecycleConfig, which in turn is applied when? And by whom?
I searched in the server.log file and found:

2019-02-11 04:05:04,490 UTC INFO  [stdout] (default task-79) [INFO ] - com.emc.d2.api.methods.D2WFLifeCycleMethod    : User  : repository1
2019-02-11 04:05:04,490 UTC INFO  [stdout] (default task-79) [INFO ] - com.emc.d2.api.methods.D2WFLifeCycleMethod    : New session manager creation.
2019-02-11 04:05:04,491 UTC INFO  [stdout] (default task-79) [INFO ] - com.emc.d2.api.methods.D2WFLifeCycleMethod    : Session manager set identity.
2019-02-11 04:05:04,491 UTC INFO  [stdout] (default task-79) [INFO ] - com.emc.d2.api.methods.D2WFLifeCycleMethod    : Session manager get session.
2019-02-11 04:05:06,006 UTC INFO  [stdout] (default task-79) [INFO ] - com.emc.d2.api.methods.D2WFLifeCycleMethod    : Workitem ID: 4a01e2408002bd3d
2019-02-11 04:05:06,023 UTC INFO  [stdout] (default task-79) [INFO ] - com.emc.d2.api.methods.D2WFLifeCycleMethod    : Searching workflow tracker...
2019-02-11 04:05:06,031 UTC INFO  [stdout] (default task-79) [INFO ] - com.emc.d2.api.methods.D2WFLifeCycleMethod    : Searching workflow config...
2019-02-11 04:05:06,032 UTC INFO  [stdout] (default task-79) [INFO ] - com.emc.d2.api.methods.D2WFLifeCycleMethod    : Get packaged documents...
2019-02-11 04:05:06,067 UTC INFO  [stdout] (default task-79) [INFO ] - com.emc.d2.api.methods.D2WFLifeCycleMethod    : Apply on masters...
2019-02-11 04:05:06,068 UTC INFO  [stdout] (default task-79) [INFO ] - com.emc.d2.api.methods.D2WFLifeCycleMethod    : Workitem acquire...
2019-02-11 04:05:06,098 UTC INFO  [stdout] (default task-79) [INFO ] - com.emc.d2.api.methods.D2WFLifeCycleMethod    : Applying lifecycle (Target state : On Approved / Transition :promote
2019-02-11 04:05:06,098 UTC INFO  [stdout] (default task-79) [INFO ] - com.emc.d2.api.methods.D2WFLifeCycleMethod    : No workflow properties
2019-02-11 04:05:06,098 UTC INFO  [stdout] (default task-79) [INFO ] - com.emc.d2.api.methods.D2WFLifeCycleMethod    : Searching target state name and/or transition type.
2019-02-11 04:05:06,099 UTC INFO  [stdout] (default task-79) [INFO ] - com.emc.d2.api.methods.D2WFLifeCycleMethod    : Target state name :On Approved
2019-02-11 04:05:06,099 UTC INFO  [stdout] (default task-79) [INFO ] - com.emc.d2.api.methods.D2WFLifeCycleMethod    : Target transition type :promote
2019-02-11 04:05:06,099 UTC INFO  [stdout] (default task-79) [INFO ] - com.emc.d2.api.methods.D2WFLifeCycleMethod    : Performing D2 lifecycle on :FRM-8003970 (0901e240800311cd)
2019-02-11 04:05:06,099 UTC INFO  [stdout] (default task-79) [INFO ] - com.emc.d2.api.methods.D2WFLifeCycleMethod    : Searching associated D2 lifecycle...
2019-02-11 04:05:06,099 UTC INFO  [stdout] (default task-79) [DEBUG] - c.e.d.a.c.m.lifecycle.D2LifecycleConfig       : D2LifecycleConfig::getInstancesForObject start time 0.000s
...
2019-02-11 04:05:39,097 UTC INFO  [stdout] (default task-60) [DEBUG] - c.e.d.a.c.m.lifecycle.D2LifecycleConfig       : D2LifecycleConfig::applyMethod start 'WCPublishDocumentMethod'
...

Hummmm, the D2WFLifeCycleMethod is applied by the job D2JobLifecycleBatch. I checked the target server of this job:

1> SELECT target_server FROM dm_job WHERE object_name='D2JobLifecycleBatch';
2> go
target_server                                                                                                                                                                               
-------------
 
(1 row affected)

As I suspected, no target server defined! That means the job can be executed on "Any Running Server", and that's why this method has been executed on CS2… while CS2 is not configured to do so.

Now, two solutions are possible:
1. Change the target_server to use only CS1 (idql):

UPDATE dm_job OBJECTS SET target_server='repository1.repository1@CONTENT_SERVER1' WHERE object_name='D2JobLifecycleBatch';

2. Add the app server WebCache to CS2, pointing to CS1 (iapi):

API>fetch,c,dm_server_config
API>append,c,l,app_server_name
WebCache
API>append,c,l,app_server_uri
https://CONTENT_SERVER1:6679/services/scs/publish
API>save,c,l

Check after update:
API> retrieve,c,dm_server_config
...
3d01e24080000255
API> dump,c,3d01e24080000255
...
USER ATTRIBUTES

  object_name                     : repository1
...
  app_server_name              [0]: do_method
                               [1]: do_mail
                               [2]: FULLTEXT_SERVER1_PORT_IndexAgent
                               [3]: FULLTEXT_SERVER2_PORT_IndexAgent
                               [4]: WebCache
  app_server_uri               [0]: https://CONTENT_SERVER1:9082/DmMethods/servlet/DoMethod
                               [1]: https://CONTENT_SERVER1:9082/DmMail/servlet/DoMail
                               [2]: https://FULLTEXT_SERVER1:PORT/IndexAgent/servlet/IndexAgent
                               [3]: https://FULLTEXT_SERVER2:PORT/IndexAgent/servlet/IndexAgent
                               [4]: https://CONTENT_SERVER1:6679/services/scs/publish
...

We chose the second option, because:
– The job is handled by the application team,
– Modifying the job to run only on CS1 would resolve this case, but if the method were applied by another job or manually on CS2, we would get the same error again.
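
To double-check the fix across all the content servers at once, you can list the server configs that still lack the WebCache entry. Here is a minimal python sketch of such a check, in the spirit of the libdmcl.so/ctypes binding idea from the scripting-languages article earlier in this blog (assumptions: the library is reachable and exports the classic dmAPI* entry points, and the credentials are placeholders):

#!/usr/bin/env python3
# minimal sketch: show the dm_server_config objects that have no 'WebCache'
# value in their repeating app_server_name attribute;
# assumptions: libdmcl.so in the loader's path, placeholder credentials;
import ctypes

dmcl = ctypes.CDLL("libdmcl.so")
dmcl.dmAPIGet.restype = ctypes.c_char_p
dmcl.dmAPIGet.argtypes = [ctypes.c_char_p]
dmcl.dmAPIExec.argtypes = [ctypes.c_char_p]

dmcl.dmAPIInit()
s = dmcl.dmAPIGet(b"connect,repository1,dmadmin,dmadmin")
if s:
    q = dmcl.dmAPIGet(b"readquery," + s + b",select object_name from dm_server_config"
                      b" where not (any app_server_name = 'WebCache')")
    if q:
        while dmcl.dmAPIExec(b"next," + s + b"," + q):
            print("WebCache missing on: " +
                  dmcl.dmAPIGet(b"get," + s + b"," + q + b",object_name").decode())
        dmcl.dmAPIExec(b"close," + s + b"," + q)
    dmcl.dmAPIExec(b"disconnect," + s)
dmcl.dmAPIDeInit()

An empty output means every server config now knows the WebCache app server.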

After this update, no error has been recorded in the log file:

...
2019-02-12 04:06:10,948 UTC INFO  [stdout] (default task-81) [DEBUG] - c.e.d.a.c.m.lifecycle.D2LifecycleConfig       : D2LifecycleConfig::applyMethod start 'WCPublishDocumentMethod'
2019-02-12 04:06:10,955 UTC INFO  [stdout] (default task-81) [DEBUG] - c.e.d.a.c.m.lifecycle.D2LifecycleConfig       : D2LifecycleConfig::applyMethod before session apply 'WCPublishDocumentMethod' time: 0.007s
2019-02-12 04:06:10,955 UTC INFO  [stdout] (default task-81) [DEBUG] - com.emc.d2.api.methods.D2Method               : No ARG_RETURN_ID in mapArguments
2019-02-12 04:06:10,956 UTC INFO  [stdout] (default task-81) [DEBUG] - com.emc.d2.api.methods.D2Method               : newObject created, user session used: 0801e2408023f714
2019-02-12 04:06:10,956 UTC INFO  [stdout] (default task-81) [DEBUG] - com.emc.d2.api.D2SysObject                    : getFolderIdFromCache: got folder: /System/D2/Data/c6_method_return, object id: 0b01e2408000256b, docbase: repository1
2019-02-12 04:06:11,016 UTC INFO  [stdout] (default task-81) [DEBUG] - com.emc.d2.api.methods.D2Method               : mapArguments: {-method_return_id=0801e2408023f714}
2019-02-12 04:06:11,016 UTC INFO  [stdout] (default task-81) [DEBUG] - com.emc.d2.api.methods.D2Method               : origArguments: {-id=0901e24080122a59}
2019-02-12 04:06:11,017 UTC INFO  [stdout] (default task-81) [DEBUG] - com.emc.d2.api.methods.D2Method               : methodName: WCPublishDocumentMethod
2019-02-12 04:06:11,017 UTC INFO  [stdout] (default task-81) [DEBUG] - com.emc.d2.api.methods.D2Method               : methodParams: -id 0901e24080122a59 -user_name dmadmin -docbase_name repository1
2019-02-12 04:06:11,017 UTC INFO  [stdout] (default task-81) [DEBUG] - com.emc.d2.api.methods.D2Method               : Start WCPublishDocumentMethod method with JMS (Java Method Services) runLocally hint set is false
2019-02-12 04:06:11,017 UTC INFO  [stdout] (default task-81) [DEBUG] - com.emc.d2.api.methods.D2Method               : key: -method_return_id, and value: 0801e2408023f714
...

I hope this blog will help you to quickly resolve this kind of incident.


Documentum : Dctm job locked after docbase installation


A correct configuration of the Documentum jobs is paramount; that's why it is the first thing we do after a docbase installation.
A few days ago, I was configuring the jobs on a new docbase using DQL, and I got an error because a job was locked by the user dmadmin.

The error message was:

DQL> UPDATE dm_job OBJECTS SET target_server=' ' WHERE target_server!=' ' ;
...
[DM_QUERY_F_UP_SAVE]fatal:  "UPDATE:  An error has occurred during a save operation."

[DM_SYSOBJECT_E_LOCKED]error:  "The operation on dm_FTQBS_WEEKLY sysobject was unsuccessful because it is locked by user dmadmin."

I checked the status of this job:

API> ?,c,select r_object_id from dm_job where object_name ='dm_FTQBS_WEEKLY';
r_object_id
----------------
0812D68780000ca6
(1 row affected)

API> dump,c,0812D68780000ca6
...
USER ATTRIBUTES

  object_name                     : dm_FTQBS_WEEKLY
  title                           :
  subject                         : qbs weekly job
...
  start_date                      : 2/28/2019 05:21:15
  expiration_date                 : 2/28/2027 23:00:00
...
  is_inactive                     : T
  inactivate_after_failure        : F
...
  run_now                         : T
...

SYSTEM ATTRIBUTES

  r_object_type                   : dm_job
  r_creation_date                 : 2/28/2019 05:21:15
  r_modify_date                   : 2/28/2019 05:24:48
  r_modifier                      : dmadmin
...
  r_lock_owner                    : dmadmin
  r_lock_date                     : 2/28/2019 05:24:48
...

APPLICATION ATTRIBUTES

...
  a_status                        :
  a_is_hidden                     : F
...
  a_next_invocation               : 3/7/2019 05:21:15

INTERNAL ATTRIBUTES

  i_is_deleted                    : F
...

The job was locked 3 minutes after the creation date… and was still locked 4 days later.

Let’s check job logs:

[dmadmin@CONTENT_SERVER1 ~]$ ls -rtl $DOCUMENTUM/dba/log/repository1/agentexec/*0812D68780000ca6*
-rw-r--r--. 1 dmadmin dmadmin   0 Feb 28 05:24 /app/dctm/server/dba/log/repository1/agentexec/job_0812D68780000ca6.lck
-rw-rw-rw-. 1 dmadmin dmadmin 695 Feb 28 05:24 /app/dctm/server/dba/log/repository1/agentexec/job_0812D68780000ca6
[dmadmin@CONTENT_SERVER1 ~]$
[dmadmin@CONTENT_SERVER1 ~]$ cat /app/dctm/server/dba/log/repository1/agentexec/job_0812D68780000ca6
Thu Feb 28 05:24:50 2019 [ERROR] [LAUNCHER 20749] Detected while preparing job ? for execution: Command Failed: connect,repository1.repository1,dmadmin,'',,,try_native_first, 
status: 0, with error message [DM_DOCBROKER_E_NO_SERVERS_FOR_DOCBASE]error:  "The DocBroker running on host (CONTENT_SERVER1:1489) does not know of a server for the specified docbase (repository1)"
...NO HEADER (RECURSION) No session id for current job.
Thu Feb 28 05:24:50 2019 [FATAL ERROR] [LAUNCHER 20749] Detected while preparing job ? for execution: Command Failed: connect,repository1.repository1,dmadmin,'',,,try_native_first, status: 0, with error message .
..NO HEADER (RECURSION) No session id for current job.

I noted three important pieces of information here:
1. The DocBroker considered that the docbase was stopped when the AgentExec sent the request.
2. The timestamp corresponds to the installation date of the docbase.
3. LAUNCHER 20749.

I checked the install logs to confirm the first point:

[dmadmin@CONTENT_SERVER1 ~]$ egrep " The installer will s.*. repository1" $DOCUMENTUM/product/7.3/install/logs/install.log*
/app/dctm/server/product/7.3/install/logs/install.log.2019.2.28.8.7.22:05:03:24,757  INFO [main]  - The installer will start component process for repository1.
/app/dctm/server/product/7.3/install/logs/install.log.2019.2.28.8.7.22:05:24:39,588  INFO [main]  - The installer will stop component process for repository1.
/app/dctm/server/product/7.3/install/logs/install.log.2019.2.28.8.7.22:05:26:49,110  INFO [main]  - The installer will start component process for repository1.

The AgentExec logs:

[dmadmin@CONTENT_SERVER1 ~]$ ls -rtl $DOCUMENTUM/dba/log/repository1/agentexec/*agentexec.log*
-rw-rw-rw-. 1 dmadmin dmadmin    640 Feb 28 05:24 agentexec.log.save.02.28.19.05.27.54
-rw-rw-rw-. 1 dmadmin dmadmin    384 Feb 28 05:36 agentexec.log.save.02.28.19.05.42.26
-rw-r-----. 1 dmadmin dmadmin      0 Feb 28 05:42 agentexec.log.save.02.28.19.09.51.24
...
-rw-r-----. 1 dmadmin dmadmin 569463 Mar  8 09:11 agentexec.log
[dmadmin@CONTENT_SERVER1 ~]$
[dmadmin@CONTENT_SERVER1 ~]$ cat $DOCUMENTUM/dba/log/repository1/agentexec/agentexec.log.save.02.28.19.05.27.54
Thu Feb 28 05:17:48 2019 [INFORMATION] [LAUNCHER 19584] Detected during program initialization: Version: 7.3.0050.0039  Linux64
Thu Feb 28 05:22:19 2019 [INFORMATION] [LAUNCHER 20191] Detected during program initialization: Version: 7.3.0050.0039  Linux64
Thu Feb 28 05:22:49 2019 [INFORMATION] [LAUNCHER 20253] Detected during program initialization: Version: 7.3.0050.0039  Linux64
Thu Feb 28 05:24:19 2019 [INFORMATION] [LAUNCHER 20555] Detected during program initialization: Version: 7.3.0050.0039  Linux64
Thu Feb 28 05:24:49 2019 [INFORMATION] [LAUNCHER 20749] Detected during program initialization: Version: 7.3.0050.0039  Linux64

I found here the LAUNCHER 20749 noted above ;) So, this job corresponds to the last one executed by the AgentExec before it was stopped.
The AgentExec was up, so the docbase should have been up as well, yet the DocBroker said that the docbase was down :(

Now, the question is: when exactly was the DocBroker informed that the docbase was shut down?

[dmadmin@CONTENT_SERVER1 ~]$ cat $DOCUMENTUM/dba/log/repository1.log.save.02.28.2019.05.26.49
...
2019-02-28T05:24:48.644873      20744[20744]    0112D68780000003        [DM_DOCBROKER_I_PROJECTING]info:  "Sending information to Docbroker located on host (CONTENT_SERVER1) with port (1489).  
Information: (Config(repository1), Proximity(1), Status(Server shut down by user (dmadmin)), Dormancy Status(Active))."

To recapitulate:
– 05:24:48.644873 : Docbase shut down and DocBroker informed
– 05:24:49 : AgentExec sent request to DocBroker

So, we can say that the AgentExec was still alive after the docbase had stopped!

Now, resolving the issue is easy :D

API> unlock,c,0812D68780000ca6
...
OK

I didn't find in the logs when exactly the docbase stopped the AgentExec; I guess the docbase requests the stop (kill) but doesn't check whether it has really been stopped.
I confess that I have encountered this error many times after a docbase installation, which is why it is useful to know why it happens and how to resolve it quickly. I advise you to review the Dctm jobs after each installation: at least check whether r_lock_date is set and, if so, whether it is justified.
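A quick way to perform that check is a DQL query listing all the currently locked jobs; a minimal example, to be run in idql:

1> select object_name, r_lock_owner, r_lock_date from dm_job where r_lock_date is not nulldate;
2> go

Any job showing an old r_lock_date with no corresponding session still running is a candidate for the unlock shown above.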

Cet article Documentum : Dctm job locked after docbase installation est apparu en premier sur Blog dbi services.

OpenText Enterprise World Europe 2019 – Partner Day


First day of the #OTEW here at the Austria International Center in Vienna; Guillaume Fuchs and I were invited to attend the Partner Global sessions.

Welcome to OTEW Vienna 2019

Mark J. Barrenechea, OpenText's CEO & CTO, started the day with some general topics concerning global trends and achievements, like:

  • More and more partners and sponsors
  • Cloud integration direction
  • Strong security brought to customers
  • AI & machine learning as the new trend
  • A new customer wave made of Gen Z and millennials to consider
  • OpenText #1 in Content Services in 2018
  • Turned to the future with Exabyte-scale goals (high-level transfers and storage)
  • A push to upgrade to version 16, the most complete Content Platform ever for security and integration
  • A real trend of SaaS with the new OT2 solutions

OpenText Cloud and OT2 is the future


Today the big concern is the sprawl of data; OpenText is addressing this point by centralizing data and flows to create an information advantage. Using the Cloud and OT2 SaaS/PaaS will open the business to everything.

OT2 is EIM as a service: a hybrid cloud platform that brings security and scalability to customer solutions and that you can integrate with leading applications like Office 365, Microsoft Teams, Documentum and more; it provides SaaS as well. One place for your data and many connectors to it. More info on it to come, stay tuned.

Smart View is the default

Smart View is the new default OpenText UI for every component, such as D2 for Documentum, SAP integration, Extended ECM, SuccessFactors and so on.


Documentum and D2

New features:

  • Add documents to subfolders without opening the folder first
  • Multi-item download -> Zip and download
  • Download phases displayed in a progress bar
  • Pages editable inline in Smart View
  • Possibility to add widgets in Smart View
  • Workspace look improved in Smart View
  • Image/media display improved: Gallery View with sorting and filters by name
  • Threaded discussions in the Smart View look and feel
  • New visual representation of permission management
  • Mobile capabilities
  • Integrated into other leading applications (Teams, SAP, SharePoint and so on…)


OpenText Roadmap

OpenText trends are the following:

  • New UI for products: Smart View: all devices, well integrated with OT2
  • Content In Context
    • Embrace Office 365, with Documentum integration
    • Integration of documentum in SAP
  • Push to Cloud
    • More cloud-based products: Docker, Kubernetes
    • Run applications anywhere with OpenText Cloud, Azure, AWS, Google
    • SaaS Applications & Services on OT2
  • Line Of Business
    • SAP applications
    • LoB solutions like SuccessFactors
    • Platform for industry solutions like Life Science, Engineering and Government
  • Intelligent Automation
    • Information extraction with machine learning (Capture)
    • Cloud capture apps for SAP, Salesforce, etc
    • Drive automation with Document Generation
    • Automatic sharing with OT Core
    • Leverage Magellan and AI
    • Personal Assistant / Bots
  • Governance:
    • Smart Compliance
    • GDPR and DPA ready
    • Archiving and Application decommissioning

Conclusion

After this first day at OTEW, we can see that OpenText is really pushing its new Smart View UI, as well as centralized services and storage with OT2 and the OpenText Cloud solutions. Content Services will become the cornerstone of all content storage, with plugged-in interfaces and components provided by the OT2 platform.

Cet article OpenText Enterprise World Europe 2019 – Partner Day est apparu en premier sur Blog dbi services.

OpenText Enterprise World Europe 2019 – Day 2


Day 2 of OTEW: this morning we followed the global stream, which revisited most of yesterday's points. But we had the pleasure of a session from Dr. Michio Kaku, theoretical physicist, futurist and popularizer of science. He has written several books about physics and how he sees the future.


He sees us, in the next 20 years, ultra-connected through internet lenses; Moore's law will collapse around 2025, when it will probably be superseded by graphene technology (instead of basic transistors), which will, at some unknown point, be replaced by quantum computing machines (qubits instead of bits). The main issue with quantum computing is that qubits are heavily disrupted by noise and electromagnetic waves (decoherence). According to him, the internet will be replaced by a brain net, thanks to new biological technologies focusing on sensations instead of visualization.

What’s new and what’s next for OpenText Documentum

We had been eagerly waiting for this session as we, Documentum experts, were excited to see the future of this widespread technology. Micah Byrd, Director of Product Management at OpenText, started by talking about the generic integration roadmap ("Content in Context", "Cloud", "LoB and industry" and "Intelligent automation") and how Documentum interprets these guidelines.

Documentum will be more and more integrated with Office 365 thanks to the new Smart View UI: a coherent solution across all platforms which allows easy and seamless fusion into leading applications like Word and SAP. This is content in context.

OpenText has been aggressively pushing Documentum to the cloud for several years with custom solutions like private, managed or public cloud. With Private, you keep your data in your data center (2014-2016). With Managed, your data goes to the OpenText cloud (2017-2018). With Public, your data goes where you want, on different cloud providers like AWS, Azure, Google and so on (2019). OpenText invests in containerization as well, with Docker and Kubernetes, for "Documentum from Everywhere".

Documentum future innovations

Among the main new features we have the continuous integration of Documentum into Office 365, which already supports Word and SAP, and soon (EP7 in October) Excel, PowerPoint and Outlook. It means that you'll be able to access Documentum data from Office software. In addition, OpenText wants to enable bi-directional synchronization between Documentum and Core, making it possible to interact with content outside of the corporate network. Hence, the content will be synced no matter where, no matter when, in a secure and controlled way.


Next to come is also an improved content creation experience in D2, thanks to tighter integration of Brava! for annotation sharing, as well as more collaborative capabilities with SharePoint (improvement of DC4SP).


A new vision of security:


D2 on mobile will come soon on iOS and Android, developed in AppWorks:


We are particularly excited about a prototype presented today: the Documentum Security Dashboard. It gives a quick and easy view of user activities, tracks content usage such as views and downloads, and can show trends in content evolution. We hope it will be released one day.


Many more topics around Documentum components were presented, but we will not go into detail about them here; we focused only on the main features.

Documentum D2 Demo

We had a chance to get our hands on the new D2 Smart View, which brings responsiveness and modernity. Our feeling about it is: SMOOTH.


Conclusion

Another amazing day at OTEW, where we met a lot of experts and attended interesting sessions about the huge OpenText world.

Cet article OpenText Enterprise World Europe 2019 – Day 2 est apparu en premier sur Blog dbi services.

Documentum – MigrationUtil – 4 – Change Host Name


In this blog I will change the Host Name; it comes after three blogs on changing the Docbase ID, the Docbase Name, and the Server Config Name. I hope you have already read them; if not, don't delay 😉

So, let’s change the Host Name!

1. Migration preparation

Update the configuration file of the Migration Utility:

[dmadmin@vmtestdctm01 ~]$ vi $DM_HOME/install/external_apps/MigrationUtil/config.xml 
<?xml version="1.0" encoding="UTF-8"?>
<!DOCTYPE properties SYSTEM "http://java.sun.com/dtd/properties.dtd">
<properties>
<comment>Database connection details</comment>
<entry key="dbms">oracle</entry> <!-- This would be either sqlserver, oracle, db2 or postgres -->
<entry key="tgt_database_server">vmtestdctm01</entry> <!-- Database Server host or IP -->
<entry key="port_number">1521</entry> <!-- Database port number -->
<entry key="InstallOwnerPassword">install164</entry>
<entry key="isRCS">no</entry>    <!-- set it to yes, when running the utility on secondary CS -->

<!-- <comment>List of docbases in the machine</comment> -->
<entry key="DocbaseName.1">docbase1</entry>

<!-- <comment>docbase owner password</comment> -->
<entry key="DocbasePassword.1">install164</entry>

...
<entry key="ChangeHostName">yes</entry>
<entry key="HostName">vmtestdctm01</entry>
<entry key="NewHostName">vmtestdctm02</entry>
...
</properties>

Be careful: the hostname may be an FQDN or not; before any change, check it using "hostname --fqdn" and compare with what you have in place.
You can also use the select queries from the log of my migration below to be sure 😉
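For instance, a quick pre-check could compare the O/S host name with the one registered in the repository (r_host_name is precisely the attribute the utility updates, as visible in the migration log further below):

[dmadmin@vmtestdctm01 ~]$ hostname --fqdn
vmtestdctm01
[dmadmin@vmtestdctm01 ~]$ idql docbase1
1> select r_object_id, r_host_name from dm_server_config;
2> go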

Stop the Docbase and the Docbroker:

$DOCUMENTUM/dba/dm_shutdown_docbase1
$DOCUMENTUM/dba/dm_stop_DocBroker

Update the database name in the server.ini file; it is a workaround to avoid the error below:

Database Details:
Database Vendor:oracle
Database Name:DCTMDB
Databse User:docbase1
Database URL:jdbc:oracle:thin:@vmtestdctm01:1521/DCTMDB
ERROR...Listener refused the connection with the following error:
ORA-12514, TNS:listener does not currently know of service requested in connect descriptor

In fact, the tool treats the database name as a database service name and puts "/" in the URL instead of ":". The best workaround I found is to update the database_conn value in the server.ini file and put the service name instead of the database name.
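For reference, the two Oracle thin JDBC URL forms differ only in the separator before the last element; the values below are illustrative:

jdbc:oracle:thin:@vmtestdctm01:1521:DCTMDB        -- SID form, colon separator
jdbc:oracle:thin:@vmtestdctm01:1521/dctmdb.local  -- service name form, slash separator (the form the tool builds)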
Check the tnsnames.ora and note the service name, which in my case is dctmdb.local:

[dmadmin@vmtestdctm01 ~]$ cat $ORACLE_HOME/network/admin/tnsnames.ora 
DCTMDB =
  (DESCRIPTION =
    (ADDRESS = (PROTOCOL = TCP)(HOST = vmtestdctm01)(PORT = 1521))
    (CONNECT_DATA =
      (SERVER = DEDICATED)
      (SERVICE_NAME = dctmdb.local)
    )
  )

Make the change in the server.ini file:

[dmadmin@vmtestdctm01 ~]$ vi $DOCUMENTUM/dba/config/docbase1/server.ini
...
[SERVER_STARTUP]
docbase_id = 123456
docbase_name = docbase1
server_config_name = docbase1
database_conn = dctmdb.local
database_owner = docbase1
...

Don't worry, we will roll back this change before starting the docbase.

Add vmtestdctm02 to /etc/hosts:

[root@vmtestdctm01 ~]$ cat /etc/hosts
127.0.0.1 localhost localhost.localdomain localhost4 localhost4.localdomain4
192.168.122.1 vmtestdctm01 vmtestdctm02

2. Execute the Migration

Execute the migration script.

[dmadmin@vmtestdctm01 ~]$ $DM_HOME/install/external_apps/MigrationUtil/MigrationUtil.sh

Welcome... Migration Utility invoked.
 
Skipping Docbase ID Changes...

Changing Host Name...
Created new log File: /app/dctm/product/16.4/product/16.4/install/external_apps/MigrationUtil/MigrationUtilLogs/HostNameChange.log
Finished changing host name...Please check log file for more details/errors
Finished changing Host Name...

Skipping Install Owner Change...
Skipping Server Name Change...
Skipping Docbase Name Change...
Skipping Docker Seamless Upgrade scenario...
Migration Utility completed.

Check the log content to understand what has been changed, and check for errors, if any.

[dmadmin@vmtestdctm01 ~]$ cat /app/dctm/product/16.4/product/16.4/install/external_apps/MigrationUtil/MigrationUtilLogs/HostNameChange.log
Start: 2019-04-09 18:55:48.613
Changing Host Name
=====================
HostName: vmtestdctm01
New HostName: vmtestdctm02
Changing HostName for docbase: docbase1
Retrieving server.ini path for docbase: docbase1
Found path: /app/dctm/product/16.4/dba/config/docbase1/server.ini

Database Details:
Database Vendor:oracle
Database Name:dctmdb.local
Databse User:docbase1
Database URL:jdbc:oracle:thin:@vmtestdctm01:1521/dctmdb.local
Successfully connected to database....

Processing Database Changes...
Created database backup File '/app/dctm/product/16.4/product/16.4/install/external_apps/MigrationUtil/MigrationUtilLogs/HostNameChange_docbase1_DatabaseRestore.sql'
Processing _s table...
select r_object_id,r_host_name from dm_server_config_s where lower(r_host_name) = lower('vmtestdctm01')
update dm_server_config_s set r_host_name = 'vmtestdctm02' where r_object_id = '3d01e24080000102'
select r_object_id,r_install_domain from dm_server_config_s where lower(r_install_domain) = lower('vmtestdctm01')
select r_object_id,web_server_loc from dm_server_config_s where lower(web_server_loc) = lower('vmtestdctm01')
update dm_server_config_s set web_server_loc = 'vmtestdctm02' where r_object_id = '3d01e24080000102'
select r_object_id,host_name from dm_mount_point_s where lower(host_name) = lower('vmtestdctm01')
update dm_mount_point_s set host_name = 'vmtestdctm02' where r_object_id = '3e01e24080000149'
select r_object_id,user_os_domain from dm_user_s where lower(user_os_domain) = lower('vmtestdctm01')
select r_object_id,user_global_unique_id from dm_user_s where lower(user_global_unique_id) like lower('vmtestdctm01:%')
select r_object_id,user_login_domain from dm_user_s where lower(user_login_domain) = lower('vmtestdctm01')
select r_object_id,target_server from dm_job_s where lower(target_server) like lower('%@vmtestdctm01')
update dm_job_s set target_server = 'docbase1.docbase1@vmtestdctm02' where r_object_id = '0801e240800003d6'
...
update dm_job_s set target_server = 'docbase1.docbase1@vmtestdctm02' where r_object_id = '0801e24080000384'
select r_object_id,object_name from dm_sysobject_s where r_object_type = 'dm_jms_config' and lower(object_name) like lower('%vmtestdctm01%')
update dm_sysobject_s set object_name = 'JMS vmtestdctm02:9080 for docbase1.docbase1' where r_object_id = '0801e240800010a4'
select r_object_id,object_name from dm_sysobject_s where r_object_type = 'dm_outputdevice' and lower(object_name) like lower('%vmtestdctm01%')
select r_object_id,object_name from dm_sysobject_s where r_object_type = 'dm_client_registration' and lower(object_name) like lower('%vmtestdctm01%')
update dm_sysobject_s set object_name = 'dfc_vmtestdctm02_WM6Aoa' where r_object_id = '0801e24080000581'
update dm_sysobject_s set object_name = 'dfc_vmtestdctm02_CqJKIa' where r_object_id = '0801e2408000058b'
update dm_sysobject_s set object_name = 'dfc_vmtestdctm02_uEp7oa' where r_object_id = '0801e24080001107'
update dm_sysobject_s set object_name = 'dfc_vmtestdctm02_j44a0a' where r_object_id = '0801e24080001111'
select r_object_id,host_name from dm_client_registration_s where lower(host_name) = lower('vmtestdctm01')
update dm_client_registration_s set host_name = 'vmtestdctm02' where r_object_id = '0801e2408000058b'
update dm_client_registration_s set host_name = 'vmtestdctm02' where r_object_id = '0801e24080001107'
update dm_client_registration_s set host_name = 'vmtestdctm02' where r_object_id = '0801e24080000581'
update dm_client_registration_s set host_name = 'vmtestdctm02' where r_object_id = '0801e24080001111'
select r_object_id,object_name from dm_sysobject_s where r_object_type = 'dm_client_rights' and lower(object_name) like lower('%vmtestdctm01%')
update dm_sysobject_s set object_name = 'dfc_vmtestdctm02_WM6Aoa' where r_object_id = '0801e24080000582'
select r_object_id,host_name from dm_client_rights_s where lower(host_name) = lower('vmtestdctm01')
update dm_client_rights_s set host_name = 'vmtestdctm02' where r_object_id = '0801e24080000582'
Successfully updated database values...
Processing _r table...
select r_object_id,base_uri,i_position from dm_sysprocess_config_r where lower(base_uri) like lower('%//vmtestdctm01:%') or lower(base_uri) like lower('%//vmtestdctm01.%:%')
update dm_sysprocess_config_r set base_uri = 'http://vmtestdctm02:9080/DmMail/servlet/DoMail' where r_object_id = '0801e240800010a4' and i_position = -3
update dm_sysprocess_config_r set base_uri = 'http://vmtestdctm02:9080/SAMLAuthentication/servlet/ValidateSAMLResponse' where r_object_id = '0801e240800010a4' and i_position = -2
update dm_sysprocess_config_r set base_uri = 'http://vmtestdctm02:9080/DmMethods/servlet/DoMethod' where r_object_id = '0801e240800010a4' and i_position = -1
select r_object_id,projection_targets,i_position from dm_sysprocess_config_r where lower(projection_targets) = lower('vmtestdctm01')
update dm_sysprocess_config_r set projection_targets = 'vmtestdctm02' where r_object_id = '0801e240800010a4' and i_position = -1
select r_object_id,acs_base_url,i_position from dm_acs_config_r where lower(acs_base_url) like lower('%//vmtestdctm01:%') or lower(acs_base_url) like lower('%//vmtestdctm01.%:%')
update dm_acs_config_r set acs_base_url = 'http://vmtestdctm02:9080/ACS/servlet/ACS' where r_object_id = '0801e24080000490' and i_position = -1
select r_object_id,method_arguments,i_position from dm_job_r where lower(method_arguments) like lower('%vmtestdctm01%')
select r_object_id,projection_targets,i_position from dm_server_config_r where lower(projection_targets) = lower('vmtestdctm01')
select r_object_id,a_storage_param_value,i_position from dm_extern_store_r where lower(a_storage_param_value) like lower('%//vmtestdctm01:%') or lower(a_storage_param_value) like lower('%//vmtestdctm01.%:%')
Successfully updated database values...
Committing all database operations...

Processing server.ini changes for docbase: docbase1
Backed up '/app/dctm/product/16.4/dba/config/docbase1/server.ini' to '/app/dctm/product/16.4/dba/config/docbase1/server.ini_host_vmtestdctm01.backup'
Updated server.ini file:/app/dctm/product/16.4/dba/config/docbase1/server.ini

Finished changing host name for docbase:docbase1

Processing DFC properties changes...
Backed up '/app/dctm/product/16.4/config/dfc.properties' to '/app/dctm/product/16.4/config/dfc.properties_host_vmtestdctm01.backup'
Updated dfc.properties file: /app/dctm/product/16.4/config/dfc.properties
No need to update dfc.properties file: /app/dctm/product/16.4/wildfly9.0.1/server/DctmServer_MethodServer/deployments/ServerApps.ear/APP-INF/classes/dfc.properties
No need to update dfc.properties file: /app/dctm/product/16.4/wildfly9.0.1/server/DctmServer_MethodServer/deployments/acs.ear/lib/configs.jar/dfc.properties
File /app/dctm/product/16.4/wildfly9.0.1/server/DctmServer_MethodServer/deployments/XhiveConnector.ear/APP-INF/classes/dfc.properties doesn't exist
Backed up '/app/dctm/product/16.4/product/16.4/install/composer/ComposerHeadless/plugins/com.emc.ide.external.dfc_1.0.0/documentum.config/dfc.properties' to '/app/dctm/product/16.4/product/16.4/install/composer/ComposerHeadless/plugins/com.emc.ide.external.dfc_1.0.0/documentum.config/dfc.properties_host_vmtestdctm01.backup'
Updated dfc.properties file: /app/dctm/product/16.4/product/16.4/install/composer/ComposerHeadless/plugins/com.emc.ide.external.dfc_1.0.0/documentum.config/dfc.properties
Finished processing DFC properties changes...

Processing File changes...
Backed up '/app/dctm/product/16.4/wildfly9.0.1/server/DctmServer_MethodServer/deployments/acs.ear/lib/configs.jar/config/acs.properties' to '/app/dctm/product/16.4/wildfly9.0.1/server/DctmServer_MethodServer/deployments/acs.ear/lib/configs.jar/config/acs.properties_host_vmtestdctm01.backup'
Updated acs.properties: /app/dctm/product/16.4/wildfly9.0.1/server/DctmServer_MethodServer/deployments/acs.ear/lib/configs.jar/config/acs.properties
WARNING...File /app/dctm/product/16.4/wildfly9.0.1/server/DctmServer_DMS/deployments/DMS.ear/lib/configs.jar/dms.properties doesn't exist
WARNING...File /app/dctm/product/16.4/wildfly9.0.1/server/DctmServer_MethodServer/deployments/XhiveConnector.ear/XhiveConnector.war/WEB-INF/web.xml doesn't exist
Backed up '/app/dctm/product/16.4/dba/dm_launch_DocBroker' to '/app/dctm/product/16.4/product/16.4/install/external_apps/MigrationUtil/MigrationUtilLogs/dm_launch_DocBroker_host_vmtestdctm01.backup'
Updated /app/dctm/product/16.4/dba/dm_launch_DocBroker
Backed up '/app/dctm/product/16.4/dba/dm_stop_DocBroker' to '/app/dctm/product/16.4/product/16.4/install/external_apps/MigrationUtil/MigrationUtilLogs/dm_stop_DocBroker_host_vmtestdctm01.backup'
Updated /app/dctm/product/16.4/dba/dm_stop_DocBroker
Finished processing File changes...

Finished changing host name...
End: 2019-04-09 18:55:50.948

3. Post Migration

Remove vmtestdctm01 from /etc/hosts.

[root@vmtestdctm02 ~]$ cat /etc/hosts
127.0.0.1 localhost localhost.localdomain localhost4 localhost4.localdomain4
192.168.122.1 vmtestdctm02

It is important to think about other applications/databases installed on the same server before this step.

Revert the change made in the server.ini file.

[dmadmin@vmtestdctm02 ~]$ vi $DOCUMENTUM/dba/config/docbase1/server.ini
...
[SERVER_STARTUP]
docbase_id = 123456
docbase_name = docbase1
server_config_name = docbase1
database_conn = DCTMDB
database_owner = docbase1
...

Start the DocBroker:

[dmadmin@vmtestdctm02 ~]$ $DOCUMENTUM/dba/dm_launch_DocBroker
starting connection broker on current host: [vmtestdctm02]
with connection broker log: [/app/dctm/product/16.4/dba/log/docbroker.vmtestdctm02.1489.log]
connection broker pid: 11863

Start the Docbase:

[dmadmin@vmtestdctm02 ~]$ $DOCUMENTUM/dba/dm_start_docbase1
starting Documentum server for repository: [docbase1]
with server log: [/app/dctm/product/16.4/dba/log/docbase1.log]
server pid: 12810

Check docbase log:

[dmadmin@vmtestdctm02 ~]$ cat $DOCUMENTUM/dba/log/docbase1.log
...
2019-04-09T19:11:30.915327	13732[13732]	0000000000000000	[DM_WORKFLOW_I_AGENT_START]info:  "Workflow agent master (pid : 13776, session 0101e24080000007) is started sucessfully."
IsProcessAlive: Process ID 0 is not > 0
2019-04-09T19:11:30.916008	13732[13732]	0000000000000000	[DM_WORKFLOW_I_AGENT_START]info:  "Workflow agent worker (pid : 13777, session 0101e2408000000a) is started sucessfully."
IsProcessAlive: Process ID 0 is not > 0
2019-04-09T19:11:31.917818	13732[13732]	0000000000000000	[DM_WORKFLOW_I_AGENT_START]info:  "Workflow agent worker (pid : 13786, session 0101e2408000000b) is started sucessfully."
IsProcessAlive: Process ID 0 is not > 0
2019-04-09T19:11:32.918943	13732[13732]	0000000000000000	[DM_WORKFLOW_I_AGENT_START]info:  "Workflow agent worker (pid : 13798, session 0101e2408000000c) is started sucessfully."
2019-04-09T19:11:33.919701	13732[13732]	0000000000000000	[DM_SERVER_I_START]info:  "Sending Initial Docbroker check-point "
2019-04-09T19:11:33.927309	13732[13732]	0000000000000000	[DM_MQ_I_DAEMON_START]info:  "Message queue daemon (pid : 13810, session 0101e24080000456) is started sucessfully."
2019-04-09T19:11:34.639677	13809[13809]	0101e24080000003	[DM_DOCBROKER_I_PROJECTING]info:  "Sending information to Docbroker located on host (vmtestdctm02) with port (1490).  Information: (Config(docbase1), Proximity(1), Status(Open), Dormancy Status(Active))."

Get the docbase map from the docbroker:

[dmadmin@vmtestdctm02 ~]$ dmqdocbroker -t vmtestdctm02 -c getdocbasemap
dmqdocbroker: A DocBroker Query Tool
dmqdocbroker: Documentum Client Library Version: 16.4.0000.0185
Targeting port 1489
**************************************************
**     D O C B R O K E R    I N F O             **
**************************************************
Docbroker host            : vmtestdctm02
Docbroker port            : 1490
Docbroker network address : INET_ADDR: 02 5d2 c0a87a01 vmtestdctm02 192.168.122.1
Docbroker version         : 16.4.0000.0248  Linux64
**************************************************
**     D O C B A S E   I N F O                  **
**************************************************
--------------------------------------------
Docbase name        : docbase1
Docbase id          : 123456
Docbase description : First docbase
Govern docbase      : 
Federation name     : 
Server version      : 16.4.0000.0248  Linux64.Oracle
Docbase Roles       : Global Registry
Docbase Dormancy Status     : 
--------------------------------------------

An idql query for a quick check:

dmadmin@vmtestdctm02 ~]$ idql docbase1
...
Connected to OpenText Documentum Server running Release 16.4.0000.0248  Linux64.Oracle
1> select user_login_name from dm_user where user_name='dmadmin';
2> go
user_login_name                                                                                                                                                                                                                                                
---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
dmadmin                                                                                                                                                                                                                                                        
(1 row affected)

4. Conclusion

This is a helpful way to change the Host Name; I have tried it many times and I can say that it works very well.
For the moment, all changes were done only on a simple environment; maybe the next blog will talk about a change on a High Availability one 😉
Have you already used this tool? Don't hesitate to share your experience!

Cet article Documentum – MigrationUtil – 4 – Change Host Name est apparu en premier sur Blog dbi services.

How to stop Documentum processes in a docker container, and more (part I)



Ideally, but not mandatorily, the management of Documentum processes is performed at the service level, e.g. by systemd. In my blog here, I showed how to configure init files for Documentum under systemd. But containers don't have systemd, yet. They just run processes, often only one, sometimes more if they are closely related (e.g. the docbroker, the method server and the content servers), so how do we replicate the same functionality with containers?
The topic of stopping processes in a docker container is abundantly discussed on-line (see for example the excellent article here). O/S signals are the magic solution, so much so that I should really have entitled this blog "Fun with the Signals"!
I'll simply see here if the presented approach can be applied in the particular case of a dockerized Documentum server. However, in order to keep things simple and quick, I won't test a real dockerized Documentum installation but rather use a script to simulate the Documentum processes, or any other processes for that matter, since it is so generic.
But first, why bother with this matter ? During all the years that I have been administrating repositories I’ve never noticed anything going wrong after restarting a suddenly stopped server, be it after an intentional kill, a pesky crash or an unplanned database unavailability. Evidently, the content server (CS henceforth) seems quite robust in this respect. Or maybe we were simply lucky so far. Personally, I don’t feel confident if I don’t shut down cleanly a process or service that must be stopped; some data might be still buffered in the CS’ memory and not flushing them properly might introduce inconsistencies or even corruptions. The same goes when an unsuspected multi-step operation is started and aborted abruptly in the middle; ideally, transactions, if they are used, exist for this purpose but anything can go wrong during rollback. Killing a process is like slamming a door, it produces a lot of noise, vibrations in the walls, even damages in the long run and always leaves a bad impression behind. Isn’t it more comforting to clean up and shut the door gently ? Even then something can go wrong but at least it will be through no fault of our own.

A few Reminders

When a "docker container stop" is issued, docker sends the SIGTERM signal to the process with PID == 1 running inside the container. That process, if programmed to do so, can then react to the signal and do anything seen fit, typically shutting the running processes down cleanly. After a 10-second grace period, the container is stopped manu militari. As for Documentum processes, to put it politely, they don't give a hoot about signals, except of course the well-known, unceremonious SIGKILL one. Thus, a proxy process must be introduced which will accept the signal and invoke the proper shutdown scripts to stop the CS processes, usually the dm_shutdown_* and dm_stop_* scripts, or a generic one that takes care of everything, at start up and at shut down time.
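In the case of a real CS container, the handler behind that proxy could simply chain the standard stop scripts. Here is a minimal sketch, assuming a single repository named repository1 and the usual script locations (the method server stop script path in particular varies between versions and is an assumption here):

shutdown_all() {
   $DOCUMENTUM/dba/dm_shutdown_repository1    # stop the content server
   $DM_JMS_HOME/server/stopMethodServer.sh    # stop the method server; this path is an assumption
   $DOCUMENTUM/dba/dm_stop_DocBroker          # stop the docbroker
   exit 0
}
trap 'shutdown_all' SIGHUP SIGQUIT SIGTERM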
Said proxy must run with PID == 1, i.e. it must be the first one started in the container. Sort of, but even if it is not the very first, its PID 1 parent can pass it control by using one of the exec() family functions; unlike forking, those in effect let a process replace its own image while keeping its PID, kind of like in the Matrix movies where the agents Smith inject themselves into someone else's persona, if you will ;-). The main thing being that at one point the proxy becomes PID 1. Luckily for us, we don't have to bother with this complexity, for the dockerfile's ENTRYPOINT[] clause takes care of everything.
The proxy also will be the one that starts the CS. In addition, since it must wait for the SIGTERM signal, it must never exit. It can indefinitely wait listening on a fake input (e.g. tail -f /dev/null), or wait for an illusory input in a detached container (e.g. while true; do read; done) or, better yet, do something useful like some light-weight monitoring.
While at it, the proxy process can listen to several conventional signals and react accordingly. For instance, a SIGUSR1 could mean "give me a docbroker docbase map" and a SIGUSR2 "restart the method server". Admittedly, these actions could be done directly by just executing the relevant commands inside the container or from the outside command-line, but the signal way is cheaper and, OK, more fun. So, let's see how we can set all this up!
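As a sketch of the first idea, in a real Documentum container the docbase map request could be a one-liner (dmqdocbroker is the standard docbroker query tool; the host and its default port are assumptions here):

trap 'dmqdocbroker -t $(hostname) -c getdocbasemap' SIGUSR1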

The implementation

As said, in order to focus on our topic, i.e. signal trapping, we've replaced the CS part with a simple simulation script, dctm.sh, that starts, stops and queries the status of dummy processes. It uses the bash shell and has been written under Linux. Here it is:

#!/bin/bash
# launches in the background, or stops or queries the status of, a command with a conventional identification string in the prompt;
# the id is a random number determined during the start;
# it should be passed to enquiry the status of the started process or to stop it;
# Usage:
#   ./dctm.sh stop  | start | status 
# e.g.:
# $ ./dctm.sh start
# | process started with pid 13562 and random value 33699963
# $ psg 33699963
# | docker   13562     1  0 23:39 pts/0    00:00:00 I am number 33699963
# $ ./dctm.sh status 33699963
# $ ./dctm.sh stop 33699963
#
# cec - dbi-services - April 2019
#
trap 'help'         SIGURG
trap 'start_all'    SIGPWR
trap 'start_one'    SIGUSR1
trap 'status_all'   SIGUSR2
trap 'stop_all'     SIGINT SIGABRT
trap 'shutdown_all' SIGHUP SIGQUIT SIGTERM

verb="sleep"
export id_prefix="I am number"

func() {
   cmd="$1"
   case $cmd in
      start)
         # do something that sticks forever-ish, min ca. 20mn;
         (( params = 1111 * $RANDOM ))
         exec -a "$id_prefix" $verb $params &
         echo "process started with pid $! and random value $params"
         ;;
      stop)
         params=" $2"
         pid=$(ps -ajxf | gawk -v params="$params" '{if (match($0, " " ENVIRON["id_prefix"] params "$")) pid = $2} END {print (pid ? pid : "")}')
         if [[ ! -z $pid ]]; then
            kill -9 ${pid} &> /dev/null
            wait ${pid} &> /dev/null
         fi
         ;;
      status)
         params=" $2"
         read pid gid < <(ps -ajxf | gawk -v params="$params" '{if (match($0, " " ENVIRON["id_prefix"] params "$")) pid = $2 " " $3} END {print (pid ? pid : "")}')
         if [[ ! -z $pid ]]; then
            echo "random value${params} is used by process with pid $pid and pgid $gid"
         else
            echo "no such process running"
         fi
         ;;
   esac
}

help() {
   echo
   echo "send signal SIGURG for help"
   echo "send signal SIGPWR to start a few processes"
   echo "send signal SIGUSR1 to start a new process"
   echo "send signal SIGUSR2 for the list of started processes"
   echo "send signal SIGINT | SIGABRT  to stop all the processes"
   echo "send signal SIGHUP | SIGQUIT | SIGTERM to shutdown the processes and exit the container"
}

start_all() {
   echo; echo "starting a few processes at $(date +"%Y/%m/%d %H:%M:%S")"
   for loop in $(seq 5); do
      func start
   done

   # show them;
   echo; echo "started processes"
   ps -ajxf | grep "$id_prefix" | grep -v grep
}

start_one() {
   echo; echo "starting a new process at $(date +"%Y/%m/%d %H:%M:%S")"
   func start
}

status_all() {
   echo; echo "status of running processes at $(date +"%Y/%m/%d %H:%M:%S")"
   for no in $(ps -ef | grep "I am number " | grep -v grep | gawk '{print $NF}'); do
      echo "showing $no"
      func status $no
   done
}

stop_all() {
   echo; echo "shutting down the processes at $(date +"%Y/%m/%d %H:%M:%S")"
   for no in $(ps -ef | grep "I am number " | grep -v grep | gawk '{print $NF}'); do
      echo "stopping $no"
      func stop $no
   done
}

shutdown_all() {
   echo; echo "shutting down the container at $(date +"%Y/%m/%d %H:%M:%S")"
   stop_all
   exit 0
}

# -----------
# main;
# -----------

# starts a few dummy processes;
start_all

# display some usage explanation;
help

# make sure the container stays up and waits for signals;
while true; do read; done

The main part of the script starts a few processes, displays a help screen and then waits for input from stdin.
The script can be first tested outside a container as follows.
Run the script:

./dctm.sh

It will start a few easily distinguishable processes and display a help screen:

starting a few processes at 2019/04/06 16:05:35
process started with pid 17621 and random value 19580264
process started with pid 17622 and random value 19094757
process started with pid 17623 and random value 18211512
process started with pid 17624 and random value 3680743
process started with pid 17625 and random value 18198180
 
started processes
17619 17621 17619 1994 pts/0 17619 S+ 1000 0:00 | \_ I am number 19580264
17619 17622 17619 1994 pts/0 17619 S+ 1000 0:00 | \_ I am number 19094757
17619 17623 17619 1994 pts/0 17619 S+ 1000 0:00 | \_ I am number 18211512
17619 17624 17619 1994 pts/0 17619 S+ 1000 0:00 | \_ I am number 3680743
17619 17625 17619 1994 pts/0 17619 S+ 1000 0:00 | \_ I am number 18198180
 
send signal SIGURG for help
send signal SIGPWR to start a few processes
send signal SIGUSR1 to start a new process
send signal SIGUSR2 for the list of started processes
send signal SIGINT | SIGABRT to stop all the processes
send signal SIGHUP | SIGQUIT | SIGTERM to shutdown the processes and exit the container

Then, it will simply sit there and wait until it is asked to quit.
From another terminal, let’s check the started processes:

ps -ef | grep "I am number " | grep -v grep
docker 17621 17619 0 14:40 pts/0 00:00:00 I am number 19580264
docker 17622 17619 0 14:40 pts/0 00:00:00 I am number 19094757
docker 17623 17619 0 14:40 pts/0 00:00:00 I am number 18211512
docker 17624 17619 0 14:40 pts/0 00:00:00 I am number 3680743
docker 17625 17619 0 14:40 pts/0 00:00:00 I am number 18198180

Those processes could be Documentum ones or anything else; the point here is to control them from the outside, e.g. from another terminal session, in or out of a docker container. We will do that through O/S signals. The bash shell lets a script listen and react to signals through the trap command. At the top of the script, we have listed all the signals we'd like the script to react to:

trap 'help'         SIGURG
trap 'start_all'    SIGPWR
trap 'start_one'    SIGUSR1
trap 'status_all'   SIGUSR2
trap 'stop_all'     SIGINT SIGABRT
trap 'shutdown_all' SIGHUP SIGQUIT SIGTERM

It’s really a feast of traps !
The first line for example says that on receiving the SIGURG signal, the script’s function help() should be executed, no matter what the script was doing at that time, which in our case is just waiting for input from stdin.
The SIGPWR signal is interpreted as: start in the background another batch of five processes with the same naming convention, "I am number " followed by a random number. The function start_all() is called on receiving this signal.
The SIGUSR1 signal starts one new process in the background. Function start_one() does just this.
The SIGUSR2 signal displays all the started processes so far by invoking function status_all().
The SIGINT and SIGABRT signals shut down all the started processes so far. Function stop_all() is called to this purpose.
Finally, the signals SIGHUP, SIGQUIT, and SIGTERM all invoke function shutdown_all() to stop all the processes and exit the script.
Admittedly, this choice of signals is a bit contrived, but it is for the sake of the demonstration, so bear with us. Feel free to remap the signals to the functions any way you prefer.
Now, how do we send those signals? The ill-named kill command is here for this. Despite its name, nobody gets killed here, fortunately; signals are merely sent and the processes decide how to react. Here, of course, we do react appropriately.
Here is its syntax (let's use the --long-options form for clarity):

/bin/kill --signal signal_name pid

Since bash has a built-in kill command that behaves differently, make sure to call the right program by specifying its full path name, /bin/kill.
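A quick way to check both variants (the full path may be /usr/bin/kill on some distributions):

type -a kill
kill is a shell builtin
kill is /bin/kill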
Example of use:

/bin/kill --signal SIGURG $(ps -ef | grep dctm.sh | grep -v grep | gawk '{print $2}')
# or shorter:
/bin/kill --signal SIGURG $(pgrep ^dctm.sh$)

The signal's target is our test program dctm.sh, which is identified to kill through its PID.
Signals can be specified by their full name, e.g. SIGURG, SIGPWR, etc… or without the SIG prefix such as URG, PWR, etc … or even through their numeric value as shown below:

/bin/kill -L
1 HUP 2 INT 3 QUIT 4 ILL 5 TRAP 6 ABRT 7 BUS
8 FPE 9 KILL 10 USR1 11 SEGV 12 USR2 13 PIPE 14 ALRM
15 TERM 16 STKFLT 17 CHLD 18 CONT 19 STOP 20 TSTP 21 TTIN
22 TTOU 23 URG 24 XCPU 25 XFSZ 26 VTALRM 27 PROF 28 WINCH
29 POLL 30 PWR 31 SYS
 
or:
 
kill -L
1) SIGHUP 2) SIGINT 3) SIGQUIT 4) SIGILL 5) SIGTRAP
6) SIGABRT 7) SIGBUS 8) SIGFPE 9) SIGKILL 10) SIGUSR1
11) SIGSEGV 12) SIGUSR2 13) SIGPIPE 14) SIGALRM 15) SIGTERM
16) SIGSTKFLT 17) SIGCHLD 18) SIGCONT 19) SIGSTOP 20) SIGTSTP
21) SIGTTIN 22) SIGTTOU 23) SIGURG 24) SIGXCPU 25) SIGXFSZ
26) SIGVTALRM 27) SIGPROF 28) SIGWINCH 29) SIGIO 30) SIGPWR
31) SIGSYS 34) SIGRTMIN 35) SIGRTMIN+1 36) SIGRTMIN+2 37) SIGRTMIN+3
38) SIGRTMIN+4 39) SIGRTMIN+5 40) SIGRTMIN+6 41) SIGRTMIN+7 42) SIGRTMIN+8
43) SIGRTMIN+9 44) SIGRTMIN+10 45) SIGRTMIN+11 46) SIGRTMIN+12 47) SIGRTMIN+13
48) SIGRTMIN+14 49) SIGRTMIN+15 50) SIGRTMAX-14 51) SIGRTMAX-13 52) SIGRTMAX-12
53) SIGRTMAX-11 54) SIGRTMAX-10 55) SIGRTMAX-9 56) SIGRTMAX-8 57) SIGRTMAX-7
58) SIGRTMAX-6 59) SIGRTMAX-5 60) SIGRTMAX-4 61) SIGRTMAX-3 62) SIGRTMAX-2
63) SIGRTMAX-1 64) SIGRTMAX

Thus, the following incantations are equivalent:

/bin/kill --signal SIGURG $(ps -ef | grep dctm.sh | grep -v grep | gawk '{print $2}')
/bin/kill --signal URG $(ps -ef | grep dctm.sh | grep -v grep | gawk '{print $2}')
/bin/kill --signal 23 $(ps -ef | grep dctm.sh | grep -v grep | gawk '{print $2}')

On receiving the supported signals, the related function is invoked and thereafter the script returns to its former activity, namely the loop that waits for a fake input. The loop is needed otherwise the script would exit on returning from a trap handler. In effect, the trap is processed like a function call and, on returning, the next statement at the point the trap occurred is given control. If there is none, then the script terminates. Hence the loop.
Here is the output after sending a few signals; for clarity, the signals sent from another terminal have been manually inserted as highlighted comments before the output they caused.
Output terminal:

# SIGUSR2:
status of running processes at 2019/04/06 16:12:46
showing 28046084
random value 28046084 is used by process with pid 29248 and pgid 29245
showing 977680
random value 977680 is used by process with pid 29249 and pgid 29245
showing 26299592
random value 26299592 is used by process with pid 29250 and pgid 29245
showing 25982957
random value 25982957 is used by process with pid 29251 and pgid 29245
showing 27830550
random value 27830550 is used by process with pid 29252 and pgid 29245
5 processes found
 
# SIGUSR1:
starting a new process at 2019/04/06 16:18:56
process started with pid 29618 and random value 22120010
 
# SIGUSR2:
status of running processes at 2019/04/06 16:18:56
showing 28046084
random value 28046084 is used by process with pid 29248 and pgid 29245
showing 977680
random value 977680 is used by process with pid 29249 and pgid 29245
showing 26299592
random value 26299592 is used by process with pid 29250 and pgid 29245
showing 25982957
random value 25982957 is used by process with pid 29251 and pgid 29245
showing 27830550
random value 27830550 is used by process with pid 29252 and pgid 29245
showing 22120010
random value 22120010 is used by process with pid 29618 and pgid 29245
6 processes found
 
# SIGURG:
send signal SIGURG for help
send signal SIGPWR to start a few processes
send signal SIGUSR1 to start a new process
send signal SIGUSR2 for the list of started processes
send signal SIGINT | SIGABRT to stop all the processes
send signal SIGHUP | SIGQUIT | SIGTERM to shutdown the processes and exit the container
 
# SIGINT:
shutting down the processes at 2019/04/06 16:20:17
stopping 28046084
stopping 977680
stopping 26299592
stopping 25982957
stopping 27830550
stopping 22120010
6 processes stopped
 
# SIGUSR2:
status of running processes at 2019/04/06 16:20:18
0 processes found
 
# SIGPWR:
starting a few processes at 2019/04/06 16:20:50
process started with pid 29959 and random value 2649735
process started with pid 29960 and random value 14971836
process started with pid 29961 and random value 14339677
process started with pid 29962 and random value 4460665
process started with pid 29963 and random value 12688731
5 processes started
 
started processes:
29245 29959 29245 1994 pts/0 29245 S+ 1000 0:00 | \_ I am number 2649735
29245 29960 29245 1994 pts/0 29245 S+ 1000 0:00 | \_ I am number 14971836
29245 29961 29245 1994 pts/0 29245 S+ 1000 0:00 | \_ I am number 14339677
29245 29962 29245 1994 pts/0 29245 S+ 1000 0:00 | \_ I am number 4460665
29245 29963 29245 1994 pts/0 29245 S+ 1000 0:00 | \_ I am number 12688731
 
# SIGUSR2:
status of running processes at 2019/04/06 16:20:53
showing 2649735
random value 2649735 is used by process with pid 29959 and pgid 29245
showing 14971836
random value 14971836 is used by process with pid 29960 and pgid 29245
showing 14339677
random value 14339677 is used by process with pid 29961 and pgid 29245
showing 4460665
random value 4460665 is used by process with pid 29962 and pgid 29245
showing 12688731
random value 12688731 is used by process with pid 29963 and pgid 29245
5 processes found
 
# SIGTERM:
shutting down the container at 2019/04/06 16:21:42
 
shutting down the processes at 2019/04/06 16:21:42
stopping 2649735
stopping 14971836
stopping 14339677
stopping 4460665
stopping 12688731
5 processes stopped

In the command terminal:

/bin/kill --signal SIGUSR2 $(ps -ef | grep dctm.sh | grep -v grep | gawk '{print $2}')
/bin/kill --signal SIGUSR1 $(ps -ef | grep dctm.sh | grep -v grep | gawk '{print $2}')
/bin/kill --signal SIGUSR2 $(ps -ef | grep dctm.sh | grep -v grep | gawk '{print $2}')
/bin/kill --signal SIGURG $(ps -ef | grep dctm.sh | grep -v grep | gawk '{print $2}')
/bin/kill --signal SIGINT $(ps -ef | grep dctm.sh | grep -v grep | gawk '{print $2}')
/bin/kill --signal SIGUSR2 $(ps -ef | grep dctm.sh | grep -v grep | gawk '{print $2}')
/bin/kill --signal SIGPWR $(ps -ef | grep dctm.sh | grep -v grep | gawk '{print $2}')
/bin/kill --signal SIGUSR2 $(ps -ef | grep dctm.sh | grep -v grep | gawk '{print $2}')
/bin/kill --signal SIGTERM $(ps -ef | grep dctm.sh | grep -v grep | gawk '{print $2}')

Of course, sending the untrappable SIGKILL signal will abort the process that executes dctm.sh. However, its child processes will survive and be reparented to the init process (PID 1):

...
status of running processes at 2019/04/10 22:38:25
showing 19996889
random value 19996889 is used by process with pid 24520 and pgid 24398
showing 5022831
random value 5022831 is used by process with pid 24521 and pgid 24398
showing 1363197
random value 1363197 is used by process with pid 24522 and pgid 24398
showing 18185959
random value 18185959 is used by process with pid 24523 and pgid 24398
showing 10996678
random value 10996678 is used by process with pid 24524 and pgid 24398
5 processes found
# /bin/kill --signal SIGKILL $(ps -ef | grep dctm.sh | grep -v grep | gawk '{print $2}')
Killed
 
ps -ef | grep number | grep -v grep
docker 24520 1 0 22:38 pts/1 00:00:00 I am number 19996889
docker 24521 1 0 22:38 pts/1 00:00:00 I am number 5022831
docker 24522 1 0 22:38 pts/1 00:00:00 I am number 1363197
docker 24523 1 0 22:38 pts/1 00:00:00 I am number 18185959
docker 24524 1 0 22:38 pts/1 00:00:00 I am number 10996678
 
# manually kill those processes;
ps -ef | grep number | grep -v grep | gawk '{print $2}' | xargs kill -9

 
ps -ef | grep number | grep -v grep
<empty>
 
# this works too:
kill -9 $(pgrep -f "I am number [0-9]+$")
# or, shorter:
pkill -f "I am number [0-9]+$"

Note that there is a simpler way to kill those related processes: by using their PGID, or process group id:

ps -axjf | grep number | grep -v grep
1 25248 25221 24997 pts/1 24997 S 1000 0:00 I am number 3489651
1 25249 25221 24997 pts/1 24997 S 1000 0:00 I am number 6789321
1 25250 25221 24997 pts/1 24997 S 1000 0:00 I am number 15840638
1 25251 25221 24997 pts/1 24997 S 1000 0:00 I am number 19059205
1 25252 25221 24997 pts/1 24997 S 1000 0:00 I am number 12857603
# processes have been reparented to PPID == 1;
# column 3 is the PGID;
# kill them using the negated PGID;
kill -9 -25221
ps -axjf | grep number | grep -v grep
<empty>

This is why the status command displays the PGID.
In order to tell kill that the given PID is actually a PGID, it has to be prefixed with a minus sign. Alternatively, the command:

pkill -g pgid

does that too.
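For example, the kill -9 -25221 from above becomes the following; note that pkill takes the PGID as-is, without the minus sign:

pkill -9 -g 25221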
All this looks quite promising so far!
Please join me now in part II of this article for the dockerization of the test script.

Cet article How to stop Documentum processes in a docker container, and more (part I) est apparu en premier sur Blog dbi services.

How to stop Documentum processes in a docker container, and more (part II)


ok, Ok, OK, and the docker part ?

In a minute.
In part I of this 2-part article, we showed how traps could be used to control a running executable from the outside. We also presented a bash test script to try out and play with traps. Now that we are confident about that simulation script, let’s dockerize it and try it out in this new environment. We use the dockerfile Dockerfile-dctm to create the CS image and so we include an ENTRYPOINT clause as follows:

FROM ubuntu:latest
RUN apt-get update &&      \
    apt-get install -y gawk
COPY dctm.sh /root/.
HEALTHCHECK --interval=5s --timeout=2s --retries=1 CMD grep -q OK /tmp/status || exit 1
ENTRYPOINT ["/root/dctm.sh", "start"]

The above ENTRYPOINT syntax (the exec form) runs the dctm.sh script directly with PID 1, with no intermediate shell. To keep the dockerfile simple, the script will run as root. In the real world, CS processes run as something like dmadmin, so this account would have to be set up in the dockerfile (or through some orchestration software).
When the docker image is run or the container is started, the dctm.sh script gets executed with PID 1; as the script is invoked with the start option, it starts the processes. Afterwards, it just sits there waiting for the SIGTERM signal from the docker stop command; once received, it shuts down all the running processes under its control and exits, which also stops the container. Additionally, it can listen and react to some other signals, just like when it runs outside of a container.
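One practical note here: docker stop only waits 10 seconds by default before resorting to SIGKILL; as a real repository can take longer than that to shut down cleanly, the grace period should be extended at stop time, e.g. with an arbitrary 120-second timeout:

docker stop --time 120 dctm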

Testing

Let’s test this approach with a container built using the above simple Dockerfile-dctm. Since the container is started in interactive mode, its output is visible on the screen and the commands to test it have to be sent from another terminal session; as before, for clarity, the commands have been inserted in the transcript as comments right before their result.

docker build -f Dockerfile-dctm --tag=dctm .
Sending build context to Docker daemon 6.656kB
Step 1/5 : FROM ubuntu:latest
---> 1d9c17228a9e
Step 2/5 : RUN apt-get update && apt-get install -y gawk
---> Using cache
---> f550d88161b6
Step 3/5 : COPY dctm.sh /root/.
---> e15e3f4ea93c
Step 4/5 : HEALTHCHECK --interval=5s --timeout=2s --retries=1 CMD grep -q OK /tmp/status || exit 1
---> Running in 0cea23cec09e
Removing intermediate container 0cea23cec09e
---> f9bf4138eb83
Step 5/5 : ENTRYPOINT ["/root/dctm.sh", "start"]
---> Running in 670c5231d5d8
Removing intermediate container 670c5231d5d8
---> 27991672905e
Successfully built 27991672905e
Successfully tagged dctm:latest
 
# docker run -i --name=dctm dctm
process started with pid 9 and random value 32760057
process started with pid 10 and random value 10364519
process started with pid 11 and random value 2915264
process started with pid 12 and random value 3744070
process started with pid 13 and random value 23787621
5 processes started
 
started processes:
1 9 1 1 ? -1 S 0 0:00 I am number 32760057
1 10 1 1 ? -1 S 0 0:00 I am number 10364519
1 11 1 1 ? -1 S 0 0:00 I am number 2915264
1 12 1 1 ? -1 S 0 0:00 I am number 3744070
1 13 1 1 ? -1 S 0 0:00 I am number 23787621
 
send signal SIGURG for help
send signal SIGPWR to start a few processes
send signal SIGUSR1 to start a new process
send signal SIGUSR2 for the list of started processes
send signal SIGINT | SIGABRT to stop all the processes
send signal SIGHUP | SIGQUIT | SIGTERM to shutdown the processes and exit the container
 
# docker kill --signal=SIGUSR2 dctm
status of running processes at 2019/04/06 14:56:14
showing 32760057
random value 32760057 is used by process with pid 9 and pgid 1
showing 10364519
random value 10364519 is used by process with pid 10 and pgid 1
showing 2915264
random value 2915264 is used by process with pid 11 and pgid 1
showing 3744070
random value 3744070 is used by process with pid 12 and pgid 1
showing 23787621
random value 23787621 is used by process with pid 13 and pgid 1
5 processes found
 
# docker kill --signal=SIGURG dctm
send signal SIGURG for help
send signal SIGPWR to start a few processes
send signal SIGUSR1 to start a new process
send signal SIGUSR2 for the list of started processes
send signal SIGINT | SIGABRT to stop all the processes
send signal SIGHUP | SIGQUIT | SIGTERM to shutdown the processes and exit the container
 
# docker kill --signal=SIGUSR1 dctm
starting a new process at 2019/04/06 14:57:30
process started with pid 14607 and random value 10066771
 
# docker kill --signal=SIGABRT dctm
shutting down the processes at 2019/04/06 14:58:12
stopping 32760057
stopping 10364519
stopping 2915264
stopping 3744070
stopping 23787621
stopping 10066771
6 processes stopped
 
# docker kill --signal=SIGUSR2 dctm
status of running processes at 2019/04/06 14:59:01
0 processes found
 
# docker kill --signal=SIGTERM dctm
shutting down the container at 2019/04/06 14:59:19
 
shutting down the processes at 2019/04/06 14:59:19
0 processes stopped
 
docker ps
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES

We observe exactly the same behavior as with the stand-alone dctm.sh, which is comforting.
Moreover, when the container is stopped, the signal is trapped correctly by the proxy:

...
random value 14725194 is used by process with pid 29 and pgid 1
showing 12554300
random value 12554300 is used by process with pid 30 and pgid 1
5 processes found
 
# date -u +"%Y/%m/%d %H:%M:%S"; docker stop dctm
# 2019/04/10 22:51:47
# dctm
shutting down the container at 2019/04/10 22:51:47
 
shutting down the processes at 2019/04/10 22:51:47
stopping 36164161
stopping 6693775
stopping 11404415
stopping 14725194
stopping 12554300
5 processes stopped

The good thing is that if the docker daemon is stopped at the host level, either interactively or at system shut down, the daemon first sends a SIGTERM to every running container:

date --utc +"%Y/%m/%d %H-%M-%S"; sudo systemctl stop docker
2019/04/06 15-02-18
[sudo] password for docker:

and on the other terminal:

shutting down the container at 2019/04/06 15:02:39
 
shutting down the processes at 2019/04/06 15:02:39
stopping 17422702
stopping 30251419
stopping 14451888
stopping 14890733
stopping 1105445
5 processes stopped

so each container can process the signal according to its needs. Our future Documentum container is now ready for a clean shutdown.

Doing something useful instead of sitting idle: light monitoring

As said, the proxy script waits for a signal from within a loop; the action performed inside the loop is waiting for input from stdin, which is not particularly useful. Why not take advantage of this slot to make it do something useful, like monitoring the running processes? Such a function already exists in the script: status_all(). Thus, let's set this up:

# while true; do read; done
# do something useful instead;
while true; do
   status_all
   sleep 30
done

We quickly notice that signal processing is not so brisk any more. In effect, bash waits until the currently executing command completes before processing a signal, here any command inside the loop, so a slight delay is perceptible before our signals are taken care of, especially if we are in the middle of a 'sleep 600' command. Moreover, incoming signals of the same type are not stacked up; they replace one another so that only the most recent one is processed. In practice, this is not a problem, for it is still possible to send signals and have them processed, just not in burst mode. If better reactivity to signals is needed, the sleep duration should be shortened and/or a separate scheduling of the monitoring introduced (started asynchronously from a loop in the entrypoint, or from a crontab inside the container?).
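As a sketch of that asynchronous variant: a subshell inherits the script's functions, so the monitoring can simply be forked in the background while the foreground loop stays on the cheap read and services traps almost immediately (the 30-second period is arbitrary):

# run the monitoring in a background subshell; it inherits status_all() from the script;
( while true; do status_all; sleep 30; done ) &
# the foreground loop just blocks on read, so pending traps are serviced quickly;
while true; do read; done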
Note that the status sent to stdout from within a detached container (i.e. started without the -i interactive option, which is generally the case) is not visible outside the container. Fortunately, and even better, the docker logs command makes it possible to view the status output on demand:

docker logs --follow container_name

In our case:

docker logs --follow dctm
status of running processes at 2019/04/06 15:21:21
showing 8235843
random value 8235843 is used by process with pid 8 and pgid 1
showing 16052839
random value 16052839 is used by process with pid 9 and pgid 1
showing 1097668
random value 1097668 is used by process with pid 10 and pgid 1
showing 5113933
random value 5113933 is used by process with pid 11 and pgid 1
showing 1122110
random value 1122110 is used by process with pid 12 and pgid 1
5 processes found

Note too that the logs command also has a --timestamps option for prefixing the output lines with the time they were produced, as illustrated below:

docker logs --timestamps --since 2019-04-06T18:06:23 dctm
2019-04-06T18:06:23.607796640Z status of running processes at 2019/04/06 18:06:23
2019-04-06T18:06:23.613666475Z showing 7037074
2019-04-06T18:06:23.616334029Z random value 7037074 is used by process with pid 8 and pgid 1
2019-04-06T18:06:23.616355592Z showing 33446655
2019-04-06T18:06:23.623719975Z random value 33446655 is used by process with pid 9 and pgid 1
2019-04-06T18:06:23.623785755Z showing 17309380
2019-04-06T18:06:23.627050839Z random value 17309380 is used by process with pid 10 and pgid 1
2019-04-06T18:06:23.627094599Z showing 13859725
2019-04-06T18:06:23.630436025Z random value 13859725 is used by process with pid 11 and pgid 1
2019-04-06T18:06:23.630472176Z showing 26767323
2019-04-06T18:06:23.633304616Z random value 26767323 is used by process with pid 12 and pgid 1
2019-04-06T18:06:23.635900480Z 5 processes found
2019-04-06T18:06:26.640490424Z

This is handy, but still not perfect, for those cases where lazy programmers neglect to date their logs’ entries.
Now, since we have a light-weight monitoring in place, we can use it in the dockerfile’s HEALTHCHECK clause to show the container’s status through the ps command. As the processes’ status is already determined in the wait loop of the dctm.sh script, it is pointless to compute it again. Instead, we can modify status_all() to print the overall status into a file, say /tmp/status, so that HEALTHCHECK can read it later, every $INTERVAL period. If status_all() is invoked every $STATUS_PERIOD, a race condition can occur every LeastCommonMultiple($INTERVAL, $STATUS_PERIOD), i.e. whenever these 2 processes access the file simultaneously, the former in reading mode and the latter in writing mode. To avoid this nasty situation, status_all() will first write into /tmp/tmp_status and later rename this file to /tmp/status; since the rename is atomic, the reader never sees a partially written file. For the sake of our example, let’s decide that the container is unhealthy if no dummy processes are running, and healthy if at least one is running (in real conditions, the container would be healthy if ALL the processes are responding, and unhealthy if ANY of them is not, but it also depends on the definition of health). Here is the new dctm.sh’s status_all() function:

status_all() {
   echo; echo "status of running processes at $(date +"%Y/%m/%d %H:%M:%S")"
   nb_processes=0
   for no in $(ps -ef | grep "I am number " | grep -v grep | gawk '{print $NF}'); do
      echo "showing $no"
      func status $no
      (( nb_processes++ ))
   done
   echo "$nb_processes processes found"
   if [[ $nb_processes -eq 0 ]]; then
      printf "status: bad\n" > /tmp/tmp_status
   else
      printf "status: OK\n" > /tmp/tmp_status
   fi
   mv /tmp/tmp_status /tmp/status
}

Here is the new dockerfile:

FROM ubuntu:latest
RUN apt-get update &&      \
    apt-get install -y gawk
COPY dctm.sh /root/.
HEALTHCHECK --interval=10s --timeout=2s --retries=2 CMD grep -q OK /tmp/status || exit 1
ENTRYPOINT ["/root/dctm.sh", "start"]

Here is what the ps command shows now:

# docker ps
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
64e22a8f75cd dctm "/root/dctm.sh start" 38 minutes ago Up 2 seconds (health: starting) dctm
 
...
# docker ps
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
64e22a8f75cd dctm "/root/dctm.sh start" 38 minutes ago Up 6 seconds (healthy) dctm

The STATUS column now also displays the container’s current health status.
If a new build is unwanted, the clause can be specified when running the image:

docker run --name dctm --health-cmd "grep -q OK /tmp/status || exit 1" --health-interval=10s --health-timeout=2s --health-retries=1 dctm

Note how these parameters are now prefixed with “health-” so they can be related to the HEALTHCHECK clause.
Now, in order to observe how the status is updated, let’s play with the signals INT and PWR to respectively stop and launch processes inside the container:

# current situation:
docker logs dctm
status of running processes at 2019/04/12 14:05:00
showing 29040429
random value 29040429 is used by process with pid 1294 and pgid 1
showing 34302125
random value 34302125 is used by process with pid 1295 and pgid 1
showing 2979702
random value 2979702 is used by process with pid 1296 and pgid 1
showing 4661756
random value 4661756 is used by process with pid 1297 and pgid 1
showing 7169283
random value 7169283 is used by process with pid 1298 and pgid 1
5 processes found
 
# show status:
docker ps
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
ff25beae71f0 dctm "/root/dctm.sh start" 55 minutes ago Up 55 minutes (healthy) dctm
 
# stop the processes:
docker kill --signal=SIGINT dctm
# wait up to the given health-interval and check again:
# docker ps
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
ff25beae71f0 dctm "/root/dctm.sh start" 57 minutes ago Up 57 minutes (unhealthy) dctm
 
# restart the processes:
docker kill --signal=SIGPWR dctm
 
# wait up to the health-interval and check again:
docker ps
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
ff25beae71f0 dctm "/root/dctm.sh start" About an hour ago Up About an hour (healthy) dctm

The health check works as expected.
Note that the above successful HEALTHCHECK tests were done under CentOS Linux release 7.6.1810 (Core) with docker Client Version 1.13.1 and API version 1.26, Server Version 1.13.1 and API version 1.26 (minimum version 1.12).
The HEALTHCHECK clause looks broken under Ubuntu 18.04.1 LTS with docker Client Version 18.09.1 and API version 1.39, Server Engine - Community Version 18.09.1 and API version 1.39 (minimum version 1.12). After a change of status, HEALTHCHECK sticks to the unhealthy state while “docker ps” keeps showing “healthy”, no matter the subsequent changes in the running processes inside the container. It looks like the monitoring cycles until an unhealthy condition occurs, then stops cycling and stays in the unhealthy state, as is also visible in the timestamps when inspecting the container’s status:

docker inspect --format='{{json .State.Health}}' dctm
{"Status":"unhealthy","FailingStreak":0,"Log":[{"Start":"2019-04-12T16:04:18.995957081+02:00","End":"2019-04-12T16:04:19.095540448+02:00","ExitCode":0,"Output":""},
{"Start":"2019-04-12T16:04:21.102151004+02:00","End":"2019-04-12T16:04:21.252025292+02:00","ExitCode":0,"Output":""},
{"Start":"2019-04-12T16:04:23.265929424+02:00","End":"2019-04-12T16:04:23.363387974+02:00","ExitCode":0,"Output":""},
{"Start":"2019-04-12T16:04:25.372757042+02:00","End":"2019-04-12T16:04:25.471229004+02:00","ExitCode":0,"Output":""},
{"Start":"2019-04-12T16:04:27.47692396+02:00","End":"2019-04-12T16:04:27.580458001+02:00","ExitCode":0,"Output":""}]}

The last 5 entries stop being updated.
While we are mentioning bugs, “docker logs --tail 0 dctm” under CentOS displays the whole log available so far, so specify at least 1 to reduce the log history output to a minimum; under Ubuntu, it works as expected. Conversely, the “--follow” option works under CentOS but not under Ubuntu. So, there is some instability here; be prepared to comprehensively test every docker feature you plan to use.
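Under CentOS, the workaround is thus simply:

docker logs --tail 1 dctm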

Using docker’s built-in init process

As said above, docker does not have a full-fledged init process like systemd but still offers something vaguely related, tini, which stands for “tiny init”, see here. It won’t solve the inability of Documentum’s processes to respond to signals, and therefore the proxy script is still needed. However, in addition to forwarding signals to its child process, tini has the advantage of taking care of defunct processes, or zombies, by reaping them regularly. Documentum produces a lot of them; they eventually disappear in the long run, but tini could speed this up a little bit.
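To check whether zombies are actually piling up or being reaped inside the container, a quick one-liner like this one can help (the STAT column of standard ps starts with Z for defunct processes):

docker exec dctm ps -eo pid,ppid,stat,comm | awk '$3 ~ /^Z/'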
tini can be invoked from the command-line as follows:

docker run -i --name=dctm --init dctm

But it is also possible to integrate it directly in the dockerfile so the --init option won’t be needed any longer (and shouldn’t be used, otherwise our tini would not be PID 1 and its reaping feature would not work anymore, making it useless for us):

FROM ubuntu:latest
COPY dctm.sh /root/.
ENV TINI_VERSION v0.18.0
ADD https://github.com/krallin/tini/releases/download/${TINI_VERSION}/tini /tini
RUN apt-get update &&      \
    apt-get install -y gawk &&      \
    chmod +x /tini
HEALTHCHECK --interval=10s --timeout=2s --retries=2 CMD grep -q OK /tmp/status || exit 1
ENTRYPOINT ["/tini", "--"]
# let tini launch the proxy;
CMD ["/root/dctm.sh", "start"]

Let’s build the image with tini:

docker build -f Dockerfile-dctm --tag=dctm:with-tini .
Sending build context to Docker daemon 6.656kB
Step 1/8 : FROM ubuntu:latest
---> 1d9c17228a9e
Step 2/8 : COPY dctm.sh /root/.
---> a724637581fe
Step 3/8 : ENV TINI_VERSION v0.18.0
---> Running in b7727fc065e9
Removing intermediate container b7727fc065e9
---> d1e1a17d7255
Step 4/8 : ADD https://github.com/krallin/tini/releases/download/${TINI_VERSION}/tini /tini
Downloading [==================================================>] 24.06kB/24.06kB
---> 47b1fc9f82c7
Step 5/8 : RUN apt-get update && apt-get install -y gawk && chmod +x /tini
---> Running in 4543b6f627f3
Get:1 http://security.ubuntu.com/ubuntu bionic-security InRelease [88.7 kB] Get:2 http://archive.ubuntu.com/ubuntu bionic InRelease [242 kB] ...
Step 6/8 : HEALTHCHECK --interval=5s --timeout=2s --retries=1 CMD grep -q OK /tmp/status || exit 1
---> Running in d2025cbde647
Removing intermediate container d2025cbde647
---> a17fd24c4819
Step 7/8 : ENTRYPOINT ["/tini", "--"] ---> Running in ee1e10062f22
Removing intermediate container ee1e10062f22
---> f343d21175d9
Step 8/8 : CMD ["/root/dctm.sh", "start"] ---> Running in 6d41f591e122
Removing intermediate container 6d41f591e122
---> 66541b8c7b37
Successfully built 66541b8c7b37
Successfully tagged dctm:with-tini

Let’s run the image:

docker run -i --name=dctm dctm:with-tini
 
starting a few processes at 2019/04/07 11:55:30
process started with pid 9 and random value 23970936
process started with pid 10 and random value 35538668
process started with pid 11 and random value 12039907
process started with pid 12 and random value 21444522
process started with pid 13 and random value 7681454
5 processes started
...

And let’s see from another terminal what the container’s processes look like with tini:

docker exec -it dctm /bin/bash
ps -ef
UID PID PPID C STIME TTY TIME CMD
root 1 0 0 11:55 ? 00:00:00 /tini -- /root/dctm.sh start
root 6 1 0 11:55 ? 00:00:00 /bin/bash /root/dctm.sh start
root 9 6 0 11:55 ? 00:00:00 I am number 23970936
root 10 6 0 11:55 ? 00:00:00 I am number 35538668
root 11 6 0 11:55 ? 00:00:00 I am number 12039907
root 12 6 0 11:55 ? 00:00:00 I am number 21444522
root 13 6 0 11:55 ? 00:00:00 I am number 7681454
root 174 0 0 11:55 ? 00:00:00 /bin/bash
root 201 6 0 11:55 ? 00:00:00 sleep 3
root 208 174 0 11:55 ? 00:00:00 ps -ef
...

So tini is really running with PID == 1 and has started the proxy as its child process as expected.
Let’s test the container by sending a few signals:

docker ps
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
5f745485a907 dctm:with-tini "/tini -- /root/dctm…" 8 seconds ago Up 7 seconds (healthy) dctm
 
# docker kill --signal=SIGINT dctm
shutting down the processes at 2019/04/07 11:59:42
stopping 23970936
stopping 35538668
stopping 12039907
stopping 21444522
stopping 7681454
5 processes stopped
 
status of running processes at 2019/04/07 11:59:42
0 processes found
 
status of running processes at 2019/04/07 11:59:45
0 processes found
 
# docker kill --signal=SIGTERM dctm
shutting down the processes at 2019/04/07 12:00:00
0 processes stopped

and then the container gets stopped. So, the signals are well transmitted to tini’s child process.
If one prefers to use the run’s --init option instead of modifying the dockerfile to introduce tini as the ENTRYPOINT, it is even better because there will be only one version of the dockerfile to maintain. Here are the invocation and what the processes look like:

docker run --name=dctm --init dctm
docker ps
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
1d9fa0d98817 dctm "/root/dctm.sh start" 4 seconds ago Up 3 seconds (health: starting) dctm
docker exec dctm /bin/bash -c "ps -ef"
UID PID PPID C STIME TTY TIME CMD
root 1 0 0 12:11 ? 00:00:00 /dev/init -- /root/dctm.sh start
root 6 1 0 12:11 ? 00:00:00 /bin/bash /root/dctm.sh start
root 9 6 0 12:11 ? 00:00:00 I am number 23850948
root 10 6 0 12:11 ? 00:00:00 I am number 19493606
root 11 6 0 12:11 ? 00:00:00 I am number 34535435
root 12 6 0 12:11 ? 00:00:00 I am number 32571187
root 13 6 0 12:11 ? 00:00:00 I am number 35596440
root 116 0 1 12:11 ? 00:00:00 /bin/bash
root 143 6 0 12:11 ? 00:00:00 sleep 3
root 144 116 0 12:11 ? 00:00:00 ps -ef

It looks even better; tini is still there (presumably) but hidden behind /dev/init, so the container will be immune to any future change of the default init process.

Adapting dctm.sh for Documentum

Adapting the proxy script to a real Documentum installation with its own central stop/start/status script, let’s name it dctm_start_stop.sh, is easy. The main changes are limited to the func() function, which now just relays the commands to the dctm_start_stop.sh script:

#!/bin/bash
# launches in the background, or stops or queries the status of, the Documentum dctm_start_stop.sh script;
# Usage:
#   ./dctm.sh stop | start | status
# e.g.:
# $ ./dctm.sh start
# cec - dbi-services - April 2019
#
trap 'help'         SIGURG
trap 'start_all'    SIGPWR
trap 'status_all'   SIGUSR2
trap 'stop_all'     SIGINT SIGABRT
trap 'shutdown_all' SIGHUP SIGQUIT SIGTERM

verb="sleep"
export id_prefix="I am number"

func() {
   cmd="$1"
   case $cmd in
      start)
         ./dctm_start_stop.sh start &
         ;;
      stop)
         ./dctm_start_stop.sh stop &
         ;;
      status)
         ./dctm_start_stop.sh status
         return $?
         ;;
   esac
}

help() {
   echo
   echo "send signal SIGURG for help"
   echo "send signal SIGPWR to start the Documentum processes"
   echo "send signal SIGUSR2 for the list of Documentum started processes"
   echo "send signal SIGINT | SIGABRT to stop all the processes"
   echo "send signal SIGHUP | SIGQUIT | SIGTERM to shutdown the Documentum processes and exit the container"
}

start_all() {
   echo; echo "starting the Documentum processes at $(date +"%Y/%m/%d %H:%M:%S")"
   func start
}

status_all() {
   echo; echo "status of Documentum processes at $(date +"%Y/%m/%d %H:%M:%S")"
   func status
   if [[ $? -eq 0 ]]; then
      printf "status: OK\n" > /tmp/tmp_status
   else
      printf "status: bad\n" > /tmp/tmp_status
   fi
   mv /tmp/tmp_status /tmp/status
}

stop_all() {
   echo; echo "shutting down the Documentum processes at $(date +"%Y/%m/%d %H:%M:%S")"
   func stop
}

shutdown_all() {
   echo; echo "shutting down the container at $(date +"%Y/%m/%d %H:%M:%S")"
   stop_all
   exit 0
}

# -----------
# main;
# -----------

# starts a few dummy processes;
[[ "$1" = "start" ]] && start_all

# make sure the container stays up and waits for signals;
while true; do status_all; sleep 3; done

Here is a skeleton of the script dctm_start_stop.sh:

#!/bin/bash
   cmd="$1"
   case $cmd in
      start)
         # insert here your Documentum installation's start scripts, e.g.
         # /app/dctm/dba/dm_launch_Docbroker
         # /app/dctm/dba/dm_start_testdb
         # /app/dctm/shared/wildfly9.0.1/server/startMethodServer.sh &
         echo "started"
         ;;
      stop)
         # insert here your Documentum installation's stop scripts, e.g.
         # /app/dctm/shared/wildfly9.0.1/server/stopMethodServer.sh
         # /app/dctm/dba/dm_shutdown_testdb
         # /app/dctm/dba/dm_stop_Docbroker
         echo "stopped"
         ;;
      status)
         # insert here your statements to test the Documentum processes's health;
         # e.g. dmqdocbroker -c -p 1489 ....
         # e.g. idql testdb -Udmadmin -Pxxx to try to connect to the docbase;
         # e.g. wget http://localhost:9080/... to test the method server;
         # 0: OK, 1: NOK;
         exit 0
         ;;
   esac
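
For illustration, here is one possible way to fill in the status branch; this is only a sketch where the docbase name testdb, the docbroker port 1489 and the method server port 9080 are assumptions to be adjusted to your installation:

      status)
         # all checks must pass for the status to be OK;
         # ping the docbroker;
         dmqdocbroker -t localhost -p 1489 -c ping > /dev/null 2>&1 || exit 1
         # try to open a session on the docbase;
         echo quit | idql testdb -Udmadmin -Pxxx > /dev/null 2>&1 || exit 1
         # probe the method server servlet;
         wget -q -O /dev/null http://localhost:9080/DmMethods/servlet/DoMethod || exit 1
         exit 0
         ;;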

Let’s introduce a slight modification in the dockerfile’s entrypoint clause: instead of having the Documentum processes start at container startup, the container will start with only the proxy running inside. Only upon receiving the signal SIGPWR will the proxy start all the Documentum processes:

ENTRYPOINT ["/root/dctm.sh", ""]

If the light-weight monitoring is in action, the container will be flagged unhealthy, but this can be a useful reminder.
Note that the monitoring could be activated or deactivated through a signal, as shown in the diff output below:

diff dctm-no-monitoring.sh dctm.sh
21a22
> trap 'status_on_off' SIGCONT
24a26
> bStatus=1
119a122,125
> status_on_off() {
>    (( bStatus = (bStatus + 1) % 2 ))
> }
> 
132c138
<    status_all
---
>    [[ $bStatus -eq 1 ]] && status_all

This is more flexible and better matches reality.
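
With that change in place, toggling the monitoring on and off from the host is just a matter of sending the signal:

docker kill --signal=SIGCONT dctm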

Shortening docker commands

We have thrown a lot of docker commands at you. If they are used often, their verbosity can be alleviated through aliases, e.g.:

alias di='docker images'
alias dpsas='docker ps -as'
alias dps='docker ps'
alias dstatus='docker kill --signal=SIGUSR2'
alias dterm='docker kill --signal=SIGTERM'
alias dabort='docker kill --signal=SIGABRT'
alias dlogs='docker logs --follow'
alias dstart='docker start'
alias dstarti='docker start -i'
alias dstop="docker stop"
alias drm='docker container rm'

or even bash functions for the most complicated ones (to be appended into e.g. your ~/.bashrc):

function drun {
image="$1"
docker run -i --name=$image $image
}
 
function dkill {
signal=$1
container=$2
docker kill --signal=$signal $container
}
 
function dbuild {
docker build -f Dockerfile-dctm --tag=$1 .
}

The typical sequence for testing the dockerfile Dockerfile-dctm to produce the image dctm and run it as the dctm container is:

dbuild dctm
drm dctm
drun dctm

Much less typing.

Conclusion

At the end of the day, it is not such a big deal that the Documentum CS does not process signals sent to it, for it is easy to work around this omission and even go beyond the basic stops and starts. As always, missing features or shortcomings become a source of inspiration and enhancements!
Containerization has lots of advantages but we have noticed that docker’s implementations vary between versions and platforms so some features don’t always work as expected, if at all.
In a future blog, I’ll show a containerization of the out of the box CS that includes signal trapping. In the meantime, live long and don’t despair.

The article How to stop Documentum processes in a docker container, and more (part II) appeared first on Blog dbi services.


Documentum – RCS/CFS installation failure

A few weeks ago, I had the task of adding a new CS into already-HA environments (DEV/TEST/PROD) to better support the load, as well as adding a new repository on all Content Servers. These environments had been installed nearly two years ago, so it was really just adding something new into the picture. The installation of the new repository on the existing Content Servers (CS1 / CS2) was successful and without much trouble (installation in silent mode obviously, so it’s fast & reliable for the CS and RCS), but then the new Remote Content Server (RCS/CFS, CS3) installation, using the same silent scripts, failed for the two existing/old repositories while it succeeded for the new one.

Well actually, the CFS installation didn’t completely fail. The silent installer returned the prompt properly, the repository start/stop scripts were present, the config folder was present, the dm_server_config object was there, aso… So it looked like the installation was successful. But, as a best practice, it is really important to always check the log file of a silent installation because it doesn’t show anything on the prompt, even if there are errors. While checking the log file after the silent installer returned the prompt, I saw the following:

[dmadmin@content_server_03 ~]$ cd $DM_HOME/install/logs/
[dmadmin@content_server_03 logs]$ cat install.log
15:12:31,830  INFO [main] com.documentum.install.shared.installanywhere.actions.InitializeSharedLibrary - Done InitializeSharedLibrary ...
15:12:31,870  INFO [main] com.documentum.install.multinode.cfs.installanywhere.actions.DiWAServerCfsInitializeImportantServerVariables - The installer is gathering system configuration information.
15:12:31,883  INFO [main] com.documentum.install.server.installanywhere.actions.DiWASilentRemoteServerValidation - Start to verify the password
15:12:33,259  INFO [main] com.documentum.fc.client.security.impl.JKSKeystoreUtilForDfc - keystore file name is /tmp/655905.tmp/dfc.keystore
15:12:33,635  INFO [main] com.documentum.fc.client.security.internal.CreateIdentityCredential$MultiFormatPKIKeyPair - generated RSA (2,048-bit strength) mutiformat key pair in 352 ms
15:12:33,667  INFO [main] com.documentum.fc.client.security.internal.CreateIdentityCredential - certificate created for DFC <CN=dfc_UnYQdYTP6pV6zRn7tQMIavqlcrAa,O=EMC,OU=Documentum> valid from Fri Feb 01 15:07:33 UTC 2019 to Mon Jan 29 15:12:33 UTC 2029:

15:12:33,668  INFO [main] com.documentum.fc.client.security.impl.JKSKeystoreUtilForDfc - keystore file name is /tmp/655905.tmp/dfc.keystore
15:12:33,681  INFO [main] com.documentum.fc.client.security.impl.InitializeKeystoreForDfc - [DFC_SECURITY_IDENTITY_INITIALIZED] Initialized new identity in keystore, DFC alias=dfc, identity=dfc_UnYQdYTP6pV6zRn7tQMIavqlcrAa
15:12:33,682  INFO [main] com.documentum.fc.client.security.impl.AuthenticationMgrForDfc - identity for authentication is dfc_UnYQdYTP6pV6zRn7tQMIavqlcrAa
15:12:33,687  INFO [main] com.documentum.fc.impl.RuntimeContext - DFC Version is 7.3.0040.0025
15:12:33,939  INFO [Timer-2] com.documentum.fc.client.impl.bof.cache.ClassCacheManager$CacheCleanupTask - [DFC_BOF_RUNNING_CLEANUP] Running class cache cleanup task
15:12:34,717  INFO [main] com.documentum.fc.client.impl.connection.docbase.DocbaseConnection - Object protocol version 2
15:12:34,758  INFO [main] com.documentum.fc.client.security.internal.AuthenticationMgr - new identity bundle <dfc_UnYQdYTP6pV6zRn7tQMIavqlcrAa   1549033954      content_server_03.dbi-services.com         hicAAvU7QX3VNvDft2PwmnW4SIFX+5Snx7PlA5hryuOpo2eWLcEANYAEwYBbU6F3hEBAMenRR/lXFrHFqlrxTZl54whGL+9VnH6CCEu4x8dxdQ+QLRE3EtLlO31SPNhqkzjyVwhktNuivhiZkxweDNynvk+pDleTPvzUvF0YSoggcoiEq+kGr6/c9vUPOMuuv1k7PR1AO05JHmu7vea9/UBaV+TFA6/cGRwVh5i5D2s1Ws7qiDlBl4R+Wp3+TbNLPjbn/SeOz5ZSjAmXThK0H0RXwbcwHo9bVm0Hzu/1n7silII4ZzjAW7dd5Jvbxb66mxC8NWaNabPksus2mTIBhg==>
15:12:35,002  INFO [main] com.documentum.fc.client.security.impl.JKSKeystoreUtilForDfc - keystore file name is /tmp/655905.tmp/dfc.keystore
15:12:35,119  INFO [main] com.documentum.fc.client.security.impl.DfcIdentityPublisher - found client registration: false
15:12:36,317  INFO [main] com.documentum.fc.client.privilege.impl.PublicKeyCertificate - stored certificate for CN
15:12:36,353  INFO [main] com.documentum.fc.client.security.impl.IpAndRcHelper - filling in GR_DocBase a new record with this persistent certificate:
-----BEGIN CERTIFICATE-----
MIIDHzCCAgcCELGIh8FYcycggMmImLESjEYwDQYJKoZIhvcNAQELBQAwTjETMBEG
YXZxbFJuN1lRZFlUTXRQNnBWNnpRY3JBYTAeFw0xOTAyMDExNTA3MzNaFw0yOTAx
MjkxNTEyMzNaME4xEzARBgNVBAsMCkRvY3VtZW50dW0xDDAKBgNVBAoMA0VNQzEp
hKnQmaMo/wCv+QXZTCsitrBNvoomcT82mYzwIxV5/7cPCIHHMcJijsJCtunjiucV
MCcGA1UEAwwgZGZjX1VuSWF2cWxSbjdZUWRZVE10UDZwVjZ6UWNyQWEwggEiMA0G
HcL0KUImSV7owDqKzV3lEYCGdomX4gYTI5bMKAiTEuGyWRKw2YTQGhfp5y0mU0hV
ORTYyRoGjpRUuXWpdrsrbX8g8gD9l6ijWTSIWfTGO/7//mTHp2zwp/TiIEuAS+RA
eFw1pBLSCKneYgquMuiyFfuCfBVNY5Q0MzyPHYxrDAp4CtjasIrNT5h3AgMBAAEw
CSqGSIb3DQEBAQUAA4IBDwAwggEKAoIBAQC4Hli+niUAD0ksVVWocPnvzV10ZOj2
DQYJKoZIhvcNAQELBQADggEBAEAre45NEpqzGMMYX1zpjgib9wldSmiPVDZbhj17
KnUCgDy7FhFQ5U5w6wf2iO9UxGV42AYQe2TjED0EbYwpYB8DC970J2ZrjZRFMy/Y
A1UECwwKRG9jdW1lbnR1bTEMMAoGA1UECgwDRU1DMSkwJwYDVQQDDCBkZmNfVW5J
gwKynVf9O10GQP0a8Z6Fr3jrtCEzfLjOXN0VxEcgwOEKRWHM4auxjevqGCPegD+y
FVWwylyIsMRsC9hOxoNHZPrbhk3N9Syhqsbl+Z9WXG0Sp4uh1z5R1NwVhR7YjZkF
19cfN8uEHqedJo26lq7oFF2KLJ+/8sWrh2a6lrb4fNXYZIAaYKjAjsUzcejij8en
Rd8yvghCc4iwWvpiRg9CW0VF+dXg6KkQmaFjiGrVosskUjuACHncatiYC5lDNJy+
TDdnNWYlctfWcT8WL/hX6FRGedT9S30GShWJNobM9vECoNg=
-----END CERTIFICATE-----
15:12:36,355  INFO [main] com.documentum.fc.client.security.impl.DfcIdentityPublisher - found client registration: false
15:12:36,535  INFO [main] com.documentum.fc.client.security.impl.IpAndRcHelper - filling a new registration record for dfc_UnYQdYTP6pV6zRn7tQMIavqlcrAa
15:12:36,563  INFO [main] com.documentum.fc.client.security.impl.DfcIdentityPublisher - [DFC_SECURITY_GR_REGISTRATION_PUBLISH] this dfc instance is now published in the global registry GR_DocBase
15:12:37,513  INFO [main] com.documentum.fc.client.impl.connection.docbase.DocbaseConnection - Object protocol version 2
15:12:38,773  INFO [main] com.documentum.fc.client.impl.connection.docbase.DocbaseConnection - Object protocol version 2
15:12:39,314  INFO [main] com.documentum.install.shared.common.services.dfc.DiDfcProperties - Installer is adding it as primary connection broker and moves existing primary as backup.
15:12:41,643  INFO [main]  - The installer updates dfc.properties file.
15:12:41,644  INFO [main] com.documentum.install.shared.common.services.dfc.DiDfcProperties - Installer is adding it as primary connection broker and moves existing primary as backup.
15:12:41,649  INFO [main] com.documentum.install.server.installanywhere.actions.DiWAServerEnableLockBoxValidation - The installer will validate AEK/Lockbox fileds.
15:12:41,656  INFO [main] com.documentum.install.shared.common.services.dfc.DiDfcProperties - Installer is changing primary as backup and backup as primary.
15:12:43,874  INFO [main]  - The installer updates dfc.properties file.
15:12:43,874  INFO [main] com.documentum.install.shared.common.services.dfc.DiDfcProperties - Installer is changing primary as backup and backup as primary.
15:12:43,876  INFO [main]  - The installer is creating folders for the selected repository.
15:12:43,876  INFO [main]  - Checking if cfs is being installed on the primary server...
15:12:43,877  INFO [main]  - CFS is not being installed on the primary server
15:12:43,877  INFO [main]  - Installer creates necessary directory structure.
15:12:43,879  INFO [main]  - Installer copies aek.key, server.ini, dbpasswd.txt and webcache.ini files from primary server.
15:12:43,881  INFO [main]  - Installer executes dm_rcs_copyfiles.ebs to get files from primary server
15:12:56,295  INFO [main]  - $DOCUMENTUM/dba/config/DocBase1/dbpasswd.txt has been created successfully
15:12:56,302  INFO [main]  - $DOCUMENTUM/dba/config/DocBase1/webcache.ini has been created successfully
15:12:56,305  INFO [main]  - Installer found exising file $DOCUMENTUM/dba/secure/lockbox.lb
15:12:56,305  INFO [main]  - Installer renamed exising file $DOCUMENTUM/dba/secure/lockbox.lb to $DOCUMENTUM/dba/secure/lockbox.lb.bak.3
15:12:56,306  INFO [main]  - $DOCUMENTUM/dba/secure/lockbox.lb has been created successfully
15:12:56,927  INFO [main]  - $DOCUMENTUM/dba/config/DocBase1/server_content_server_03_DocBase1.ini has been created successfully
15:12:56,928  INFO [main]  - Installer found exising file $DOCUMENTUM/dba/castore_license
15:12:56,928  INFO [main]  - Installer renamed exising file $DOCUMENTUM/dba/castore_license to $DOCUMENTUM/dba/castore_license.bak.3
15:12:56,928  INFO [main]  - $DOCUMENTUM/dba/castore_license has been created successfully
15:12:56,931  INFO [main]  - $DOCUMENTUM/dba/config/DocBase1/ldap_080f123450006deb.cnt has been created successfully
15:12:56,934  INFO [main]  - Installer updates server.ini
15:12:56,940  INFO [main]  - The installer tests database connection.
15:12:57,675  INFO [main]  - Database successfully opened.
Test table successfully created.
Test view successfully created.
Test index successfully created.
Insert into table successfully done.
Index successfully dropped.
View successfully dropped.
Database case sensitivity test successfully past.
Table successfully dropped.
15:13:00,675  INFO [main]  - The installer creates server config object.
15:13:00,853  INFO [main]  - The installer is starting a process for the repository.
15:13:01,993  INFO [main] com.documentum.install.multinode.cfs.installanywhere.actions.DiWAServerCreateContentFileServerPostSeq - logPath is $DOCUMENTUM/dba/log/content_server_03_DocBase1.log
15:13:03,079  INFO [main] com.documentum.install.multinode.cfs.installanywhere.actions.DiWAServerCreateContentFileServerPostSeq - logPath is $DOCUMENTUM/dba/log/content_server_03_DocBase1.log
15:13:04,149  INFO [main] com.documentum.install.multinode.cfs.installanywhere.actions.DiWAServerCreateContentFileServerPostSeq - logPath is $DOCUMENTUM/dba/log/content_server_03_DocBase1.log
15:13:05,187  INFO [main] com.documentum.install.multinode.cfs.installanywhere.actions.DiWAServerCreateContentFileServerPostSeq - logPath is $DOCUMENTUM/dba/log/content_server_03_DocBase1.log
15:13:06,256  INFO [main] com.documentum.install.multinode.cfs.installanywhere.actions.DiWAServerCreateContentFileServerPostSeq - logPath is $DOCUMENTUM/dba/log/content_server_03_DocBase1.log
15:14:06,352  INFO [main]  - Waiting for repository DocBase1.content_server_03_DocBase1 to start up.
15:14:25,003  INFO [main] com.documentum.fc.client.impl.connection.docbase.DocbaseConnection - Object protocol version 2
15:14:25,495  INFO [main] com.documentum.fc.client.security.impl.JKSKeystoreUtilForDfc - keystore file name is /tmp/655905.tmp/dfc.keystore
15:14:25,498  INFO [main] com.documentum.fc.client.security.impl.JKSKeystoreUtilForDfc - keystore file name is /tmp/655905.tmp/dfc.keystore
15:14:25,513  INFO [main] com.documentum.fc.client.security.impl.DfcIdentityPublisher - found client registration: true
15:14:25,672  INFO [main] com.documentum.fc.client.security.impl.DfcRightsCreator - assigning rights to all roles for this client on DocBase1
15:14:25,682  INFO [main] com.documentum.fc.client.security.impl.DfcRightsCreator - found client rights: false
15:14:25,736  INFO [main] com.documentum.fc.client.privilege.impl.PublicKeyCertificate - stored certificate for CN
15:14:25,785  INFO [main] com.documentum.fc.client.security.impl.IpAndRcHelper - filling in DocBase1 a new record with this persistent certificate:
-----BEGIN CERTIFICATE-----
MIIDHzCCAgcCELGIh8FYcycggMmImLESjEYwDQYJKoZIhvcNAQELBQAwTjETMBEG
YXZxbFJuN1lRZFlUTXRQNnBWNnpRY3JBYTAeFw0xOTAyMDExNTA3MzNaFw0yOTAx
MjkxNTEyMzNaME4xEzARBgNVBAsMCkRvY3VtZW50dW0xDDAKBgNVBAoMA0VNQzEp
hKnQmaMo/wCv+QXZTCsitrBNvoomcT82mYzwIxV5/7cPCIHHMcJijsJCtunjiucV
MCcGA1UEAwwgZGZjX1VuSWF2cWxSbjdZUWRZVE10UDZwVjZ6UWNyQWEwggEiMA0G
HcL0KUImSV7owDqKzV3lEYCGdomX4gYTI5bMKAiTEuGyWRKw2YTQGhfp5y0mU0hV
ORTYyRoGjpRUuXWpdrsrbX8g8gD9l6ijWTSIWfTGO/7//mTHp2zwp/TiIEuAS+RA
eFw1pBLSCKneYgquMuiyFfuCfBVNY5Q0MzyPHYxrDAp4CtjasIrNT5h3AgMBAAEw
CSqGSIb3DQEBAQUAA4IBDwAwggEKAoIBAQC4Hli+niUAD0ksVVWocPnvzV10ZOj2
DQYJKoZIhvcNAQELBQADggEBAEAre45NEpqzGMMYX1zpjgib9wldSmiPVDZbhj17
KnUCgDy7FhFQ5U5w6wf2iO9UxGV42AYQe2TjED0EbYwpYB8DC970J2ZrjZRFMy/Y
A1UECwwKRG9jdW1lbnR1bTEMMAoGA1UECgwDRU1DMSkwJwYDVQQDDCBkZmNfVW5J
gwKynVf9O10GQP0a8Z6Fr3jrtCEzfLjOXN0VxEcgwOEKRWHM4auxjevqGCPegD+y
FVWwylyIsMRsC9hOxoNHZPrbhk3N9Syhqsbl+Z9WXG0Sp4uh1z5R1NwVhR7YjZkF
19cfN8uEHqedJo26lq7oFF2KLJ+/8sWrh2a6lrb4fNXYZIAaYKjAjsUzcejij8en
Rd8yvghCc4iwWvpiRg9CW0VF+dXg6KkQmaFjiGrVosskUjuACHncatiYC5lDNJy+
TDdnNWYlctfWcT8WL/hX6FRGedT9S30GShWJNobM9vECoNg=
-----END CERTIFICATE-----
15:14:25,789  INFO [main] com.documentum.fc.client.security.impl.DfcIdentityPublisher - found client registration: true
15:14:25,802  INFO [main] com.documentum.fc.client.security.impl.DfcRightsCreator - found client rights: false
15:14:25,981  INFO [main] com.documentum.fc.client.security.impl.IpAndRcHelper - filling a new rights record for dfc_UnYQdYTP6pV6zRn7tQMIavqlcrAa
15:14:26,032  INFO [main] com.documentum.fc.client.security.impl.DfcRightsCreator - [DFC_SECURITY_DOCBASE_RIGHTS_REGISTER] this dfc instance has now escalation rights registered with docbase DocBase1
15:14:26,052  INFO [main] com.documentum.install.appserver.jboss.JbossApplicationServer - setApplicationServer sharedDfcLibDir is:$DOCUMENTUM/shared/dfc
15:14:26,052  INFO [main] com.documentum.install.appserver.jboss.JbossApplicationServer - getFileFromResource for templates/appserver.properties
15:14:26,059  INFO [main] com.documentum.install.server.installanywhere.actions.DiWAServerAddDocbaseEntryToWebXML - BPM webapp does not exist.
15:14:26,191  INFO [main] com.documentum.install.server.installanywhere.actions.cfs.DiWAServerProcessingScripts2 - Executing the Docbase HeadStart script.
15:14:36,202  INFO [main] com.documentum.install.server.installanywhere.actions.cfs.DiWAServerProcessingScripts2 - Executing the Creates ACS config object script.
15:14:46,688  INFO [main] com.documentum.install.server.installanywhere.actions.cfs.DiWAServerProcessingScripts2 - Executing the This script does miscellaneous setup tasks for remote content servers script.
15:14:56,840 ERROR [main] com.documentum.install.server.installanywhere.actions.cfs.DiWAServerProcessingScripts2 - The installer failed to execute the This script does miscellaneous setup tasks for remote content servers script. For more information, please read output file: $DOCUMENTUM/dba/config/DocBase1/dm_rcs_setup.out.
com.documentum.install.shared.common.error.DiException: The installer failed to execute the This script does miscellaneous setup tasks for remote content servers script. For more information, please read output file: $DOCUMENTUM/dba/config/DocBase1/dm_rcs_setup.out.
        at com.documentum.install.server.installanywhere.actions.cfs.DiWAServerProcessingScripts2.setup(DiWAServerProcessingScripts2.java:98)
        at com.documentum.install.shared.installanywhere.actions.InstallWizardAction.install(InstallWizardAction.java:75)
        at com.zerog.ia.installer.actions.CustomAction.installSelf(Unknown Source)
        at com.zerog.ia.installer.AAMgrBase.an(Unknown Source)
        at com.zerog.ia.installer.ConsoleBasedAAMgr.ac(Unknown Source)
        at com.zerog.ia.installer.AAMgrBase.am(Unknown Source)
        at com.zerog.ia.installer.AAMgrBase.runNextInstallPiece(Unknown Source)
        ...
        at com.zerog.ia.installer.ConsoleBasedAAMgr.ac(Unknown Source)
        at com.zerog.ia.installer.AAMgrBase.runPreInstall(Unknown Source)
        at com.zerog.ia.installer.LifeCycleManager.consoleInstallMain(Unknown Source)
        at com.zerog.ia.installer.LifeCycleManager.executeApplication(Unknown Source)
        at com.zerog.ia.installer.Main.main(Unknown Source)
        at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
        at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
        at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
        at java.lang.reflect.Method.invoke(Method.java:498)
        at com.zerog.lax.LAX.launch(Unknown Source)
        at com.zerog.lax.LAX.main(Unknown Source)
15:14:56,843  INFO [main]  - The INSTALLER_UI value is SILENT
15:14:56,843  INFO [main]  - The KEEP_TEMP_FILE value is true
15:14:56,843  INFO [main]  - The common.installOwner.password value is ******
15:14:56,843  INFO [main]  - The SERVER.SECURE.ROOT_PASSWORD value is ******
15:14:56,843  INFO [main]  - The common.upgrade.aek.lockbox value is null
15:14:56,843  INFO [main]  - The common.old.aek.passphrase.password value is null
15:14:56,843  INFO [main]  - The common.aek.algorithm value is AES_256_CBC
15:14:56,843  INFO [main]  - The common.aek.passphrase.password value is ******
15:14:56,843  INFO [main]  - The common.aek.key.name value is CSaek
15:14:56,843  INFO [main]  - The common.use.existing.aek.lockbox value is null
15:14:56,843  INFO [main]  - The SERVER.ENABLE_LOCKBOX value is true
15:14:56,844  INFO [main]  - The SERVER.LOCKBOX_FILE_NAME value is lockbox.lb
15:14:56,844  INFO [main]  - The SERVER.LOCKBOX_PASSPHRASE.PASSWORD value is ******
15:14:56,844  INFO [main]  - The SERVER.COMPONENT_ACTION value is CREATE
15:14:56,844  INFO [main]  - The SERVER.DOCBROKER_ACTION value is null
15:14:56,844  INFO [main]  - The SERVER.PRIMARY_CONNECTION_BROKER_HOST value is content_server_01.dbi-services.com
15:14:56,844  INFO [main]  - The SERVER.PRIMARY_CONNECTION_BROKER_PORT value is 1489
15:14:56,844  INFO [main]  - The SERVER.PROJECTED_CONNECTION_BROKER_HOST value is content_server_03.dbi-services.com
15:14:56,844  INFO [main]  - The SERVER.PROJECTED_CONNECTION_BROKER_PORT value is 1489
15:14:56,844  INFO [main]  - The SERVER.FQDN value is content_server_03.dbi-services.com
15:14:56,845  INFO [main]  - The SERVER.DOCBASE_NAME value is DocBase1
15:14:56,845  INFO [main]  - The SERVER.PRIMARY_SERVER_CONFIG_NAME value is DocBase1
15:14:56,845  INFO [main]  - The SERVER.REPOSITORY_USERNAME value is dmadmin
15:14:56,845  INFO [main]  - The SERVER.SECURE.REPOSITORY_PASSWORD value is ******
15:14:56,845  INFO [main]  - The SERVER.REPOSITORY_USER_DOMAIN value is
15:14:56,845  INFO [main]  - The SERVER.REPOSITORY_USERNAME_WITH_DOMAIN value is dmadmin
15:14:56,845  INFO [main]  - The SERVER.REPOSITORY_HOSTNAME value is content_server_01.dbi-services.com
15:14:56,845  INFO [main]  - The SERVER.CONNECTION_BROKER_NAME value is null
15:14:56,845  INFO [main]  - The SERVER.CONNECTION_BROKER_PORT value is null
15:14:56,846  INFO [main]  - The SERVER.DOCBROKER_NAME value is
15:14:56,846  INFO [main]  - The SERVER.DOCBROKER_PORT value is
15:14:56,846  INFO [main]  - The SERVER.DOCBROKER_CONNECT_MODE value is null
15:14:56,846  INFO [main]  - The SERVER.USE_CERTIFICATES value is false
15:14:56,846  INFO [main]  - The SERVER.DOCBROKER_KEYSTORE_FILE_NAME value is null
15:14:56,846  INFO [main]  - The SERVER.DOCBROKER_KEYSTORE_PASSWORD_FILE_NAME value is null
15:14:56,846  INFO [main]  - The SERVER.DOCBROKER_CIPHER_LIST value is null
15:14:56,853  INFO [main]  - The SERVER.DFC_SSL_TRUSTSTORE value is null
15:14:56,853  INFO [main]  - The SERVER.DFC_SSL_TRUSTSTORE_PASSWORD value is ******
15:14:56,853  INFO [main]  - The SERVER.DFC_SSL_USE_EXISTING_TRUSTSTORE value is null
15:14:56,853  INFO [main]  - The SERVER.CONNECTION_BROKER_SERVICE_STARTUP_TYPE value is null
15:14:56,854  INFO [main]  - The SERVER.DOCUMENTUM_DATA value is $DATA
15:14:56,854  INFO [main]  - The SERVER.DOCUMENTUM_SHARE value is $DOCUMENTUM/share
15:14:56,854  INFO [main]  - The CFS_SERVER_CONFIG_NAME value is content_server_03_DocBase1
15:14:56,854  INFO [main]  - The SERVER.DOCBASE_SERVICE_NAME value is DocBase1
15:14:56,854  INFO [main]  - The CLIENT_CERTIFICATE value is null
15:14:56,854  INFO [main]  - The RKM_PASSWORD value is ******
15:14:56,854  INFO [main]  - The SERVER.DFC_BOF_GLOBAL_REGISTRY_VALIDATE_OPTION_IS_SELECTED value is null
15:14:56,854  INFO [main]  - The SERVER.PROJECTED_DOCBROKER_PORT_OTHER value is null
15:14:56,854  INFO [main]  - The SERVER.PROJECTED_DOCBROKER_HOST_OTHER value is null
15:14:56,854  INFO [main]  - The SERVER.GLOBAL_REGISTRY_REPOSITORY value is null
15:14:56,854  INFO [main]  - The SERVER.BOF_REGISTRY_USER_LOGIN_NAME value is null
15:14:56,855  INFO [main]  - The SERVER.SECURE.BOF_REGISTRY_USER_PASSWORD value is ******
15:14:56,855  INFO [main]  - The SERVER.COMPONENT_ACTION value is CREATE
15:14:56,855  INFO [main]  - The SERVER.COMPONENT_NAME value is null
15:14:56,855  INFO [main]  - The SERVER.DOCBASE_NAME value is DocBase1
15:14:56,855  INFO [main]  - The SERVER.CONNECTION_BROKER_NAME value is null
15:14:56,855  INFO [main]  - The SERVER.CONNECTION_BROKER_PORT value is null
15:14:56,855  INFO [main]  - The SERVER.PROJECTED_CONNECTION_BROKER_HOST value is content_server_03.dbi-services.com
15:14:56,855  INFO [main]  - The SERVER.PROJECTED_CONNECTION_BROKER_PORT value is 1489
15:14:56,855  INFO [main]  - The SERVER.PRIMARY_SERVER_CONFIG_NAME value is DocBase1
15:14:56,855  INFO [main]  - The SERVER.DOCBROKER_NAME value is
15:14:56,856  INFO [main]  - The SERVER.DOCBROKER_PORT value is
15:14:56,856  INFO [main]  - The SERVER.CONNECTION_BROKER_SERVICE_STARTUP_TYPE value is null
15:14:56,856  INFO [main]  - The SERVER.REPOSITORY_USERNAME value is dmadmin
15:14:56,856  INFO [main]  - The SERVER.REPOSITORY_PASSWORD value is ******
15:14:56,856  INFO [main]  - The SERVER.REPOSITORY_USER_DOMAIN value is
15:14:56,856  INFO [main]  - The SERVER.REPOSITORY_USERNAME_WITH_DOMAIN value is dmadmin
15:14:56,856  INFO [main]  - The SERVER.DFC_BOF_GLOBAL_REGISTRY_VALIDATE_OPTION_IS_SELECTED_KEY value is null
15:14:56,856  INFO [main]  - The SERVER.PROJECTED_DOCBROKER_PORT_OTHER value is null
15:14:56,856  INFO [main]  - The SERVER.PROJECTED_DOCBROKER_HOST_OTHER value is null
15:14:56,856  INFO [main]  - The SERVER.GLOBAL_REGISTRY_REPOSITORY value is null
15:14:56,856  INFO [main]  - The SERVER.BOF_REGISTRY_USER_LOGIN_NAME value is null
15:14:56,856  INFO [main]  - The SERVER.SECURE.BOF_REGISTRY_USER_PASSWORD value is ******
15:14:56,856  INFO [main]  - The SERVER.COMPONENT_ACTION value is CREATE
15:14:56,857  INFO [main]  - The SERVER.COMPONENT_NAME value is null
15:14:56,857  INFO [main]  - The SERVER.PRIMARY_SERVER_CONFIG_NAME value is DocBase1
15:14:56,857  INFO [main]  - The SERVER.DOCBASE_NAME value is DocBase1
15:14:56,857  INFO [main]  - The SERVER.REPOSITORY_USERNAME value is dmadmin
15:14:56,857  INFO [main]  - The SERVER.REPOSITORY_PASSWORD value is ******
15:14:56,857  INFO [main]  - The SERVER.REPOSITORY_USER_DOMAIN value is
15:14:56,857  INFO [main]  - The SERVER.REPOSITORY_USERNAME_WITH_DOMAIN value is dmadmin
15:14:56,857  INFO [main]  - The env PATH value is: /usr/xpg4/bin:$DOCUMENTUM/shared/java64/JAVA_LINK/bin:$DM_HOME/bin:$DOCUMENTUM/dba:$ORACLE_HOME/bin:$DOCUMENTUM/shared/java64/JAVA_LINK/bin:$DM_HOME/bin:$DOCUMENTUM/dba:$ORACLE_HOME/bin:$DM_HOME/bin:$ORACLE_HOME/bin:/usr/lib64/qt-3.3/bin:/usr/local/bin:/bin:/usr/bin:/usr/local/sbin:/usr/sbin:/sbin:/home/dmadmin/bin:/bin:/usr/bin:/sbin:/usr/sbin:/usr/local/bin
[dmadmin@content_server_03 logs]$

 

As you can see above, everything was going well until the script “This script does miscellaneous setup tasks for remote content servers” was executed. Yes, that is a hell of a description, isn’t it? What this script actually does is run the “dm_rcs_setup.ebs” script (you can find it under $DM_HOME/install/admin/) on the repository to set up the remote jobs, project the RCS/CFS repository to the local docbroker, create the log folder and a few other things. Here was the content of the output file for the execution of this EBS:

[dmadmin@content_server_03 logs]$ cat $DOCUMENTUM/dba/config/DocBase1/dm_rcs_setup.out
Running dm_rcs_setup.ebs script on docbase DocBase1.content_server_03_DocBase1 to set up jobs for a remote content server.
docbaseNameOnly = DocBase1
Connected To DocBase1.content_server_03_DocBase1
$DOCUMENTUM/dba/log/000f1234/sysadmin was created.
Duplicating distributed jobs.
Creating job object for dm_ContentWarningcontent_server_03_DocBase1
Successfully created job object for dm_ContentWarningcontent_server_03_DocBase1
Creating job object for dm_LogPurgecontent_server_03_DocBase1
Successfully created job object for dm_LogPurgecontent_server_03_DocBase1
Creating job object for dm_ContentReplicationcontent_server_03_DocBase1
Successfully created job object for dm_ContentReplicationcontent_server_03_DocBase1
Creating job object for dm_DMCleancontent_server_03_DocBase1
The dm_DMClean job does not exist at the primary server so we will not create it at the remote site, either.
Failed to create job object for dm_DMCleancontent_server_03_DocBase1
[DM_API_E_BADID]error:  "Bad ID given: 0000000000000000"

[DM_API_E_BADID]error:  "Bad ID given: 0000000000000000"

[DM_API_E_BADID]error:  "Bad ID given: 0000000000000000"

[DM_API_E_BADID]error:  "Bad ID given: 0000000000000000"

[DM_API_E_NO_MATCH]error:  "There was no match in the docbase for the qualification: dm_job where object_name = 'dm_DMClean' and lower(target_server) like lower('DocBase1.DocBase1@%')"


Exiting with return code (-1)
[dmadmin@content_server_03 logs]$
[dmadmin@content_server_03 logs]$

 

The RCS/CFS installation is failing because the creation of a remote job cannot complete successfully. It worked properly for 3 out of the 5 remote jobs but not for the 2 remaining ones. Only one failure is shown in the log file because the installer didn’t even try to process the 2nd one: it had already failed and stopped the installation at that point. That’s why the start/stop scripts were there, the log folder was there and the dm_server_config was OK as well, but some pieces were actually missing.

The issue here is that the RCS/CFS installation isn’t able to find the r_object_id of the “dm_DMClean” job (it mentions “Bad ID given: 0000000000000000”) and is therefore not able to create the remote job. The last message is actually more interesting: “There was no match in the docbase for the qualification: dm_job where object_name = ‘dm_DMClean’ and lower(target_server) like lower(‘DocBase1.DocBase1@%’)”.

The RCS/CFS installation is looking for the job named ‘dm_DMClean’, which is OK, but it is also filtering on a target_server equal to ‘docbase_name.server_config_name@…’, and here it doesn’t find any result. A quick pre-check, as sketched below, can reveal this before the installer is even launched.
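
A sketch of such a pre-check via iAPI (adjust the names to your repository); it lists the target_server of the distributed jobs the installer will try to duplicate:

API> ?,c,select object_name, target_server from dm_job where object_name in ('dm_ContentWarning','dm_LogPurge','dm_DMClean','dm_DMFilescan','dm_ContentReplication');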

 

So what happened? Like I was saying in the introduction, this environment was already installed several years ago in HA already. As a result of that, the jobs were already configured by us as we would expect them. Usually, we are configuring the jobs as follow (I’m only talking about the distributed jobs here):

Job Name on CS1          Job Status on CS1    Job Name on RCS          Job Status on RCS
dm_ContentWarning        Active               dm_ContentWarning%       Inactive
dm_LogPurge              Active               dm_LogPurge%             Active
dm_DMClean               Active               dm_DMClean%              Inactive
dm_DMFilescan            Active               dm_DMFilescan%           Inactive
dm_ContentReplication    Inactive             dm_ContentReplication%   Inactive

Based on this, we usually disable the dm_ContentReplication jobs completely (if replication is not needed) and we obviously leave the dm_LogPurge jobs enabled (all of them), each with its target_server set to the local CS it is supposed to run on (so 1 job per CS). For the 3 remaining jobs, it depends on the load of the environment. These jobs can be set to run on the CS1 by setting the target_server to ‘DocBase1.DocBase1@content_server_01.dbi-services.com’, or they can be set to run on ANY Content Server by setting an empty target_server (a single space: ‘ ‘). It doesn’t matter where these jobs run, but it is important that they do run, hence setting them to ANY available Content Server is better so they aren’t bound to a single point of failure; see the sketch below.
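
For instance, a sketch of the one-liner we use via iAPI to repoint such a job to any running server (the single space matters):

API> ?,c,update dm_job object set target_server = ' ' where object_name = 'dm_DMClean';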

So the reason why the RCS/CFS installation failed is that we had configured our jobs properly… Funny, right? As you can see in the logs, dm_ContentWarning was created properly, but that was because someone had been doing some testing with this job and it was temporarily set to run on the CS1 only; it was therefore a coincidence/luck that the installer could find it.

After the failure point, there is normally not much left to do except creating the JMS config object, checking the ACS URLs and finally restarting the JMS. Still, it is cleaner to just remove the RCS/CFS, clean up the repository objects still remaining (the distributed jobs that were created; see the sketch below) and then reinstall the RCS/CFS after setting the jobs as the installer expects them to be…
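
For that cleanup, a sketch of the DQL that drops the remote jobs created before the failure (the pattern matches the remote server config name seen in the logs):

API> ?,c,delete dm_job object where object_name like '%content_server_03_DocBase1';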

 

The article Documentum – RCS/CFS installation failure appeared first on Blog dbi services.

Documentum – Remote Content Server Decommission

I had already experienced the installation of a remote Content Server but this was my first decommissioning, and as usual the decommission is faster than the installation 😉 By the way, read this blog if you missed how to install a remote Docbase 🙂

The first step is to delete all remote Docbases, then delete the DocBroker, and finally uninstall the Content Server. In my case, I have 4 Content Servers (ser, ser-0, ser-01, ser-1) and I would like to decommission ser-1.

Now, let’s start.

1. Preparation

A. Stop Docbase(s) and DocBroker on ser-1

$DOCUMENTUM/dba/dm_shutdown_repo
$DOCUMENTUM/dba/dm_shutdown_repo1
$DOCUMENTUM/dba/dm_stop_DocBroker

This is not mandatory, as the installer will stop them anyway before deletion.

B. Review Projections on Client Applications
Review the projections of every client application (e.g. DA, D2, D2-Config) to remove the reference to the uninstalled DocBroker, as sketched below. Keeping the old references will not cause any major issue, but performance can be affected until the dead DocBroker is discarded.
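
As a sketch, the relevant dfc.properties entries on a client typically look like the following; remove the pair pointing to the decommissioned host and renumber the remaining entries so the indexes stay contiguous:

dfc.docbroker.host[0]=ser
dfc.docbroker.port[0]=1489
# to be removed (then renumber any following entries):
# dfc.docbroker.host[1]=ser-1
# dfc.docbroker.port[1]=1489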

C. Review Projections of the remaining CS nodes
Review the projections of the remaining CS nodes to remove the projections to the uninstalled DocBroker. Keeping the old projections will not cause any major issue, but the CS will take longer to start and some errors will appear in the logs. Don’t forget to review both the server.ini file and the dm_server_config configuration object of each remaining node:

Projections in the server.ini file should be updated manually; the installer doesn’t remove a deleted projection target.

cat $DOCUMENTUM/dba/config/repo/server.ini
...
[DOCBROKER_PROJECTION_TARGET]
host = ser
port = 1489
proximity=1
...

Projections in dm_server_config should be checked/updated manually for each docbase.

API> ?,c,select r_object_id, object_name from dm_server_config;
r_object_id			object_name
----------------	-----------
3d01e24080000102  	repo       
3d01e2408000b7b4  	ser-0_repo 
3d01e2408000c523  	ser-01_repo 
3d01e2408000c575  	ser-1_repo
(4 rows affected)

API> dump,c,3d01e2408000c523
...
USER ATTRIBUTES

  object_name                     : ser-01_repo
  ...
  projection_targets           [0]: ser-1
  projection_ports             [0]: 1489
  projection_proxval           [0]: 2
  projection_notes             [0]: Projecting to the fourth CS
  projection_enable            [0]: T
  ...

For all Docbases and all remaining nodes, check if any is projecting to the server to decommission, and change the projection to another remaining node:

API> fetch,c,3d01e2408000c523
API> set,c,l,projection_targets[0]
SET> ser-0
...
OK
API> set,c,l,projection_ports[0]
SET> 1489
...
OK
API> set,c,l,projection_proxval[0]
SET> 2
...
OK
API> set,c,l,projection_notes[0]
SET> Projecting to the second CS
...
OK
API> set,c,l,projection_enable[0]
SET> T
...
OK
API> dump,c,l
...
USER ATTRIBUTES

  object_name                     : ser-01_repo
  ...
  projection_targets           [0]: ser-0
  projection_ports             [0]: 1489
  projection_proxval           [0]: 2
  projection_notes             [0]: Projecting to the second CS
  projection_enable            [0]: T
  ...
API> save,c,l
...
OK
API> reinit,c
...
OK
API> exit
Bye

Repeat the steps for the other Docbases, if any.

2. Delete Docbase(s) and DocBroker

To delete a Docbase, launch the installer by executing the script below:

$DOCUMENTUM/product/7.3/install/dm_launch_cfs_server_config_program.sh

Then check “Delete content-file server” and click “Next”

Select the repository from the list (delete the global repository at the end) then click “Next”

If a docbase is shown twice (or more), see this blog to know the root cause and the solution.

Put the password of dmadmin and click “Next”

Wait… As said before the installer will stop the docbase if it is started.

CFS deleted, click “Done”

Repeat the operation for other docbases if any.

To create the configuration file which can later be used for a silent deletion, execute the script below with the -r parameter:

$DOCUMENTUM/product/7.3/install/dm_launch_cfs_server_config_program.sh -r cfs_server_config.properties

To delete a DocBroker, launch the installer by executing the script below:

$DOCUMENTUM/product/7.3/install/dm_launch_server_config_program.sh

Check “Connection broker” and click “Next”

Check “Delete a connection broker” and click “Next”

Click on “Yes” to confirm

Check “Finish configuration” and click “Next”

Repeat the steps for the other DocBrokers, if any.

3. Uninstall the Content Server

To uninstall the CS, execute the following:

$DOCUMENTUM/uninstall/server/Uninstall

Click “Next”

As mentioned in the screenshot, the files and folders created after the installation will not be removed; you have to do it manually if needed.

Wait..

Click “Finish” when the Finish button appears.

4. POST Uninstall

A. Disable and delete jobs
Disable and delete all jobs built specifically to act on the uninstalled CS node (like dm_LogPurge). To make it easy, delete all the jobs whose name contains the server name.

Select impacted jobs:

API> ?,c,select object_name from dm_job where object_name like '%ser-1%';
object_name                                                                                                                                                                      
---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
dm_ContentWarningSER-1_repo                                                                                                                                         
dm_LogPurgeSER-1_repo                                                                                                                                            
dm_ContentReplicationSER-1_repo                                                                                                                                      
dm_DMCleanSER-1_repo                                                                                                                                                 
dm_DMFilescanSER-1_repo                                                                                                                                              
(5 rows affected)

Disable and delete them:

API> ?,c,update dm_job object set is_inactive=1 where object_name like '%SER-1%';
objects_updated
---------------
              5
(1 row affected)
[DM_QUERY_I_NUM_UPDATE]info:  "5 objects were affected by your UPDATE statement."

API> ?,c,delete dm_job object where object_name like '%SER-1%';
objects_deleted
---------------
              5
(1 row affected)
[DM_QUERY_I_NUM_UPDATE]info:  "5 objects were affected by your DELETE statement."

B. Review target of each job
Change the target server of the jobs configured to run on the uninstalled CS node (if any). You can point them to one of the remaining nodes, or choose the “Any Running Server” option:

API> ?,c,select object_name from dm_job where target_server like '%SER-1%';
object_name                                                                                                                                                                      
----------------
(0 rows affected)

By the way, as you can see from the query above, no action is needed if you configured your jobs correctly, as explained in this blog.
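
Had the query returned some rows, a one-liner like this sketch would have repointed them to any running server (the single space is the “Any Running Server” value):

API> ?,c,update dm_job object set target_server = ' ' where target_server like '%SER-1%';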

C. Restart the whole environment
If possible, restart all the remaining Content Servers and the Client Applications as well; this way you are sure to start from a clean basis.

Please don’t hesitate to ask or share your experience 😉

The article Documentum – Remote Content Server Decommission appeared first on Blog dbi services.

Documentum – Delete Remote Docbase – Strange behavior

In the previous blog I showed you how to decommission a remote Content Server by deleting the remote Docbase(s) and Docbroker(s). In fact, when I tried to delete the remote Docbases I encountered a strange behavior, and it wasn’t easy to find the root cause!

The suppression of a remote Docbase can be done silently or interactively… In case you use the silent way, you will probably not see the strange behavior 😉

1. Symptom

At the step where you choose the Docbase from the drop-down list, I saw each Docbase twice, as shown below:
Strange-behavior

Here, I cancelled the deletion to understand why, and to make sure it was only a display issue, in order to avoid side effects.

2. Analysis

So, to understand what’s going on:

I activated RPC and SQL traces via iAPI:

apply,c,NULL,SET_OPTIONS,OPTION,S,sqltrace,VALUE,B,T
apply,c,NULL,SET_OPTIONS,OPTION,S,rpctrace,VALUE,B,T

I activated the DFC traces by adding the following to the dfc.properties file:

dfc.tracing.enable=true
dfc.tracing.verbose=true
dfc.tracing.max_stack_depth=0
dfc.tracing.include_rpcs=true
dfc.tracing.mode=standard
dfc.tracing.include_session_id=true
dfc.tracing.timing_style=date
dfc.tracing.date_column_width=12
dfc.tracing.date_format=yyyy-MM-dd hh:mm:ss:SSS
dfc.tracing.log.level=DEBUG
dfc.tracing.print_exception_stack=true
dfc.tracing.file_prefix=emc.support
dfc.tracing.dir=/tmp

I started the installer again:

$DOCUMENTUM/product/7.3/install/dm_launch_cfs_server_config_program.sh 

Once I reached the screen and opened the list of repositories with the duplicated names, I disabled the traces via iAPI:

apply,c,NULL,SET_OPTIONS,OPTION,S,sqltrace,VALUE,B,F
apply,c,NULL,SET_OPTIONS,OPTION,S,rpctrace,VALUE,B,F

I also disabled the DFC traces in the dfc.properties file:

dfc.tracing.enable=false

I checked the following log files:
– Repository log
– DFC Traces output

Nothing! Really nothing related to the deletion… I suspected that the installer takes the list of Docbases from a file somewhere on the server…

First lead: I checked the server.ini file and its content:

[dmadmin@ser-1 ~]$ ls -rtl $DOCUMENTUM/dba/config/repo/*server*
-rw-rw-r--. 1 dmadmin dmadmin 1978 Dec 21  2017 /app/dctm/server/dba/config/repo/server.ini
[dmadmin@ser-1 ~]$ cat /app/dctm/server/dba/config/repo/server.ini
...
[SERVER_STARTUP]
docbase_id = 123456
docbase_name = repo
server_config_name = ser-1_repo
database_conn = DBrepo
database_owner = repo
...

Nothing in there that could cause this issue 🙁

3. Solution

I decided to browse the folders under $DOCUMENTUM, and what really jumped out at me was that there were two start scripts per repository (a backup of the original and the updated one):

[dmadmin@ser-1 ~]$ ls -rtl $DOCUMENTUM/dba/dm_start*
-rwxrw-rw-. 1 dmadmin dmadmin  2656 Mar 24  2017 dm_start_repo1_repo1_bck_20180123-132143
-rwxrw-rw-. 1 dmadmin dmadmin  2656 Mar 27  2017 dm_start_repo_repo_bck_20180123-132143
-rwxrw-rw-. 1 dmadmin dmadmin  2658 Jul 23  2018 dm_start_repo_repo
-rwxrw-rw-. 1 dmadmin dmadmin  2658 Jul 23  2018 dm_start_repo1_repo1

I deleted the backup files and checked again. YES, it was because of the extra start scripts in $DOCUMENTUM/dba!
Installer-correct

By the way, the start files had been updated to correct a permission issue; see this blog for more information 😉

I made some tests to understand how the installer behaves:

I re-created the files deleted above to get the same list again, then copied an existing dm_start script:

cp $DOCUMENTUM/dba/dm_start_repo_repo $DOCUMENTUM/dba/dm_start_

A third “repo” appeared in the list.

I updated the content of the newly created dm_start_ file:

vi $DOCUMENTUM/dba/dm_start_
...
DM_REPOSITORY_NAME=test1
DM_REPOSITORY_SERVICE_NAME=test1
...

Now, I got the list below 🙂
Result after test

To sum up, the installer greps both values below in all dm_start_* files and displays all of them:

DM_REPOSITORY_NAME=repo
DM_REPOSITORY_SERVICE_NAME=repo

So, it is only a display issue, caused by a rather simplistic way of selecting the Docbases; I think you will agree that there are more robust ways to do it.
There is no issue if you select the remote Docbase and delete it, but it will keep being shown by the installer until you delete the backup file.
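
For the curious, here is a rough shell approximation of what the installer seems to be doing (my guess, not the actual installer code):

grep -h "^DM_REPOSITORY_NAME=" $DOCUMENTUM/dba/dm_start_* | cut -d= -f2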

I hope this blog will save you some time 😉 Don't hesitate to share other strange behaviors you have encountered.

The article Documentum – Delete Remote Docbase – Strange behavior first appeared on Blog dbi services.

Documentum – FT – Document not found using search from D2


At a customer, I received an incident saying that in D2 a document could be found by browsing but not by using the normal search. The root cause seems obvious: the document isn't indexed?! Not really; as you will see, it wasn't that easy to find 😉

1. Analysis

When we have trouble finding a document, usually the problem is that the document is not indexed or the user doesn't have enough permissions.

I checked whether the document was indexed, by doing a search from:
– the DsearchAdmin, document found:
doc found

– the Content Server using idql, document found as well:

1> select r_object_id,object_name,r_modifier,r_modify_date from dm_document search document contains 'A_694';
2> go
r_object_id       object_name        r_modifier      r_modify_date
----------------  -----------------  --------------  --------------------
0901e240802ca812  A_694              dmadmin         3/30/2019 03:02:30

So, the document is indexed and found correctly, as shown in both searches above.
Let's check the permissions anyway, even though I already know the user has access since he can browse and see the document.
First, I got the ACL name of the document:

API> dump,c,0901e240802ca812
...
USER ATTRIBUTES

  object_name                     : A_694
  title                           : This Document is related to my blog
...
  acl_domain                      : Doc
  acl_name                        : d2_2350e171_213b12de
...

Then I got the ACL object ID:

1> select r_object_id,description from dm_acl where object_name='d2_2350e171_213b12de';
2> go
r_object_id       description         
----------------  ------------------- 
4501e24080028cce  1 - BLOG - Documents
(1 row affected)

Check permissions:

API> dump,c,4501e24080028cce
...
USER ATTRIBUTES

  object_name                     : d2_2350e171_213b12de
  description                     : 1 - BLOG - Documents
...

SYSTEM ATTRIBUTES

  r_is_internal                   : F
  r_accessor_name              [0]: dm_world
                               [1]: dm_owner
                               [2]: GROUP_BLOG_TEST1
                               [3]: GROUP_BLOG_TEST2
  r_accessor_permit            [0]: 1
                               [1]: 7
                               [2]: 3
                               [3]: 6
  r_accessor_xpermit           [0]: 3
                               [1]: 3
                               [2]: 3
                               [3]: 3
  r_is_group                   [0]: F
                               [1]: F
                               [2]: T
                               [3]: T
...

The impacted user is a member of GROUP_BLOG_TEST1; you can check it using DA for example: browse to
Administration -> User Management -> Users, then find the impacted user, right click and choose “View Current User Memberships”.
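
Alternatively, a quick DQL check gives the same information (john.doe below is a placeholder for the impacted user's login name):

1> select group_name from dm_group where any users_names = 'john.doe';
2> go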

So, the document is indexed and the user has the correct permissions…

2. Solution

In fact, when a user searches using keywords, Documentum runs the search against the indexed documents to find matches, but that's not all… The ACLs are also retrieved from the search and applied to the matching documents in order to grant the user access accordingly, which means that the ACLs must be indexed as well!

Check ACL index status in the DsearchAdmin:

4501e24080028cce  d2_2350e171_213b12de 

The ACL is not found:
ACL NOT FOUND

Submit the indexing request to the queue, using the API:

queue,c,4501e24080028cce,dm_FT_i_user
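
To check whether the request has been processed, you can for example query the corresponding queue item (dmi_queue_item is the standard queue object type; task_state tells you whether the item is still pending, acquired or done):

API> ?,c,select item_id, task_state, date_sent from dmi_queue_item where item_id = '4501e24080028cce';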

Once the ACL was indexed, I asked the user to search for the document again in D2, and he could find it. So yes, the ACL needs to be indexed as well; otherwise the document will not be found, even if the document itself is indexed.

The article Documentum – FT – Document not found using search from D2 first appeared on Blog dbi services.

Documentum – DOCUMENTUM_SHARED is dead?


In June last year, I did my first manual installation (i.e. without Docker) of Documentum 16.4 and I was testing it with PostgreSQL. I quickly realized that there were some changes in Documentum and, unfortunately, I don't believe they are for the best! In this blog, I will talk about the DOCUMENTUM_SHARED environment variable. I tested this almost a year ago with the PostgreSQL binaries, but it's the same for all Documentum 16.4 binaries. This isn't a very technical blog; it's more of a small reflection about what OpenText is currently doing.

 

I. DOCUMENTUM_SHARED is dead

 

In Documentum 7.3 or below, you could define an environment variable named DOCUMENTUM_SHARED before installing Documentum (see this blog for example), which would then be used to define where the shared components of a Content Server should be installed. This mainly includes the following:

  • The DFC properties and libraries
  • The Java Method Server (JBoss/WildFly + all Applications)
  • The Java binaries

Starting with Documentum 16.4, this environment variable has been deprecated (see KB11002330) and Documentum will simply ignore it. So, you will end up with all the above components being installed right under $DOCUMENTUM, together with everything else. I don't like that because on Linux we are used to splitting things, and therefore we are used to having only a few folders under $DOCUMENTUM and a few others under $DOCUMENTUM_SHARED. Now everything is put under $DOCUMENTUM, even the DFC files/folders. By default, in your 16.4 dfc.properties, the definition of “dfc.data.dir=…” points to $DOCUMENTUM as well ($DOCUMENTUM_SHARED before), so you will end up with a lot of ugly stuff right under $DOCUMENTUM and it becomes messy! These are the DFC files/folders I'm talking about:

  • $DOCUMENTUM/apptoken/
  • $DOCUMENTUM/cache/
  • $DOCUMENTUM/checkout/
  • $DOCUMENTUM/export/
  • $DOCUMENTUM/identityInterprocessMutex.lock
  • $DOCUMENTUM/local/
  • $DOCUMENTUM/logs/

Obviously, you can change the definition of “dfc.data.dir” so that all of this is put elsewhere, and you should really do it for every dfc.properties file, but it's still kind of surprising; a minimal sketch of such an override is shown below. When I'm doing a review of an environment or an assessment of some sort, the first thing I always do is go to the $DOCUMENTUM folder and list its content. If this folder is clean (no log files, no backups, no temp files, no cache files, aso…), then there is a good chance that the complete installation is more or less clean as well. If there is already a lot of mess in the $DOCUMENTUM folder, then I know that it'll be a long day.
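
As an illustration (a sketch only: /app/dctm/server/dfc_data is a made-up target, adjust it to your own standards), the override in each dfc.properties could look like this:

# dfc.properties excerpt - keep the DFC data files out of $DOCUMENTUM
# NOTE: /app/dctm/server/dfc_data is an example path
dfc.data.dir=/app/dctm/server/dfc_data
# if you also want the apptoken files to follow (dfc.tokenstorage.dir controls their location)
dfc.tokenstorage.dir=/app/dctm/server/dfc_data/apptoken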

 

II. long live DOCUMENTUM_SHARED! (for now)

 

So why am I saying that? Well, as always when you try to deprecate something, there are leftovers here and there, and it's pretty hard to change people's minds… Take for example “docbase” VS “repository”: since Documentum 7.0, a “docbase” is officially called a “repository”, and yet a lot of people still use “docbase”, and even Documentum does (there are a lot of remaining references everywhere). I believe it will be the same for DOCUMENTUM_SHARED.

At the moment in Documentum 16.4, there are the following references to DOCUMENTUM_SHARED:

  • D2 16.4 still uses DOCUMENTUM_SHARED to know where the components are installed. This is mainly used to deploy the D2 libraries into the JMS. I didn't check, but I guess it's the same for the BPM/xCP
  • MigrationUtil (change of docbase ID, docbase name, server config name, aso…) still uses DOCUMENTUM_SHARED to know where the dfc.properties is, where the JMS is, aso…
  • the dm_set_server_env scripts still use DOCUMENTUM_SHARED to define other variables like LD_LIBRARY_PATH or CLASSPATH

Because of these remaining references (and probably many more), OpenText didn't just remove the DOCUMENTUM_SHARED variable completely… No, it's still there, but they put it, with a hardcoded value (the same as $DOCUMENTUM), directly into the dm_set_server_env scripts, so the other references keep working properly. The snippet below shows what this looks like.
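
Reconstructed from memory (so not a verbatim copy of the 16.4 script, and /app/dctm/server is just my usual install location), the relevant part of dm_set_server_env.sh now looks more or less like this:

# excerpt of dm_set_server_env.sh (16.4) - DOCUMENTUM_SHARED is hardcoded
# to the same location as DOCUMENTUM instead of being read from the environment
DOCUMENTUM_SHARED=/app/dctm/server
export DOCUMENTUM_SHARED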

OpenText probably just didn't want to remove the environment variable completely right away, so they are proceeding step by step: first ignoring it, then probably removing it completely in a future major version. Until then, I will continue to define my DOCUMENTUM_SHARED environment variable, but for Documentum 16.4, I will set it to the same value as DOCUMENTUM because, who knows, maybe in the next version the variable will come back… ;)

 

The article Documentum – DOCUMENTUM_SHARED is dead? first appeared on Blog dbi services.
