ZFS recursive snapshot rollback

I’m using ZFS snapshots via cron to save the filesystems of all non-global zones. This can be achieved with the zfs snapshot option “-r”, which during snapshot creation means

create snapshots of the target filesystem and all descendant filesystems

If you made a mistake and want to roll back that last action, you will be disappointed that the corresponding “-r” option of the zfs rollback command has a different meaning:

destroy snapshots newer than the specified snapshot

All posts I found on the internet indicate that there is no way to perform a recursive rollback using zfs alone. A small shell script can help with recursive rollbacks:

zfs-recursive-rollback
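A minimal sketch of what such a script can do, assuming the recursive snapshot exists on every descendant dataset (the linked script may differ in its details):

```shell
#!/bin/sh
# Sketch: roll back a given snapshot on a dataset and all descendants.
# Usage: ./zfs-recursive-rollback <dataset> <snapshot-name>
target="$1"
snap="$2"

# "zfs list -r" prints the dataset and all of its descendants;
# -H suppresses headers, -o name restricts output to the name column.
zfs list -H -r -o name "$target" | while read -r fs; do
    zfs rollback "${fs}@${snap}"
done
```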

Of course, after copying the file to your system you have to make it executable with a command like the one below.
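For example, assuming the script was saved as zfs-recursive-rollback in the current directory:

```shell
# Make the downloaded script executable.
chmod +x zfs-recursive-rollback
```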

You should also adapt the command-line options in the script if the defaults do not fit your needs.

Foiltex incompatibility with hyperref

I had problems compiling older documents that used to build without any issues.

Some googling gave me the hint that hyperref has changed a bit and that dependent packages therefore have to be updated. In my case the foiltex package was not up to date.

I use MiKTeX as my LaTeX distribution, which offers a comfortable update manager. It is a pity that foiltex is not included in the MiKTeX package repository.

I downloaded the foiltex zip package from CTAN.

After unzipping it into an arbitrary directory, you have to execute the following latex command.
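Assuming the archive contains the usual installer file foiltex.ins, as the CTAN package does:

```shell
latex foiltex.ins
```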

This produces all the individual files of the LaTeX package. I manually created a directory to put the foiltex files in:

To make MiKTeX aware of the new files, finalize the installation with the command below.
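Assuming a standard MiKTeX installation, refreshing the filename database looks like this:

```shell
initexmf --update-fndb
```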

Happy foils creation!

Google trends analysis of ESB products

For enterprises with many interfaces it is very helpful to make use of an ESB (enterprise service bus). A quick analysis with Google Trends for the most-used ESB systems leads to the following diagram:

Apache Camel is integrated into Apache ServiceMix, which is based on OSGi, so hot deployment and concurrent use of different component versions at the same time are no problem.

Usually such systems make extensive use of MOM (message-oriented middleware) to achieve loose coupling between the processing steps (especially input and output). As a consequence, such systems create a huge number of threads. On SMP or CMP systems it is important to check that the JVM is not using just one CPU/core at a time, in order to get the best performance.

As described very nicely in the blog post Parallel Processing and Multi-Core Utilization with Java, it is important not to just use

new Thread()

but instead to make use of the methods of the standard JDK class

java.util.concurrent.Executors

especially of its method

newFixedThreadPool(int nThreads)
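A small, self-contained example of a fixed-size pool sized to the machine’s core count (the class name and the trivial task are illustrative):

```java
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.Future;

public class PoolDemo {
    public static void main(String[] args) throws Exception {
        // Size the pool to the number of available cores so the JVM
        // can keep all CPUs busy.
        int cores = Runtime.getRuntime().availableProcessors();
        ExecutorService pool = Executors.newFixedThreadPool(cores);

        // Submit a trivial task; real work items would go here.
        Future<Integer> result = pool.submit(() -> 21 + 21);
        System.out.println(result.get()); // prints 42
        pool.shutdown();
    }
}
```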

In Apache ServiceMix / Camel there are several possibilities to make use of all CPU cores:

  • Thread pool profiles (see also: Camel Threading Model)
  • Apache Servicemix Clustering (in this case the ESB can even be spread over several different servers)

 

For one company I replaced SAP XI with Apache ServiceMix, which brought several advantages:

  • Even the core of the ESB is open source, so we had the chance to find and fix bugs in the core system on our own.
  • Apache ServiceMix / Camel allows creating interface OSGi bundles in a very flexible and lightweight way, which reduces development effort.
  • Because Apache ServiceMix uses a current Java version (which is not the case for all SAP XI versions – some still use JDK 1.4), it is possible to use the huge number of Java libraries and packages available on the market (many of them free).
  • Many of the Enterprise Integration Patterns described in the book of the same name are already available as ready-to-use features in Camel.

Because such systems rely on message queues, the availability of the queue servers is very important. Really satisfying HA solutions for JMS message queue systems are not that easy to set up. To at least avoid losing messages, it is quite easy to set up several stand-alone MQ servers and configure the client with an HA URL, so that in case of a problem with one server the client delivers the message to another MQ server.
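The post does not name a concrete broker; assuming Apache ActiveMQ (an assumption on my part), such an HA client URL can look like this, with illustrative host names:

```
failover:(tcp://mq1.example.com:61616,tcp://mq2.example.com:61616)?randomize=false
```

The failover: transport makes the client try the listed brokers in order until one accepts the connection.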

If someone has a good solution for the synchronization problem between the MQ servers, please let me know.

martin[at]dres-menzel[DOT]de

 

Issue using CXF SOAP to send IDoc XML to SAP ERP

We created our own function module which takes an arbitrary IDoc in XML format as an input parameter. The function module maps the IDoc XML to the corresponding IDoc structure in SAP and afterwards calls standard function modules to hand the IDoc over to core SAP.

In the corresponding transaction we had to set “trigger immediately” because of business restrictions.

When sending IDoc messages with the code fragment below, we observed different IDoc statuses in WE02 depending on the timeout values.
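A sketch of how such timeouts can be configured on a CXF client (the class name, the proxy parameter, and the timeout values are illustrative):

```java
import org.apache.cxf.endpoint.Client;
import org.apache.cxf.frontend.ClientProxy;
import org.apache.cxf.transport.http.HTTPConduit;
import org.apache.cxf.transports.http.configuration.HTTPClientPolicy;

public final class CxfTimeoutConfig {
    // "port" is the generated JAX-WS proxy used to call the IDoc
    // function module -- the name and the values are assumptions.
    public static void configureTimeouts(Object port) {
        Client client = ClientProxy.getClient(port);
        HTTPConduit conduit = (HTTPConduit) client.getConduit();
        HTTPClientPolicy policy = new HTTPClientPolicy();
        policy.setConnectionTimeout(30000);  // ms until the connection is established
        policy.setReceiveTimeout(120000);    // ms to wait for the answer from SAP
        conduit.setClient(policy);
    }
}
```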

If the timeouts are big enough, the IDoc gets the status “IDoc processed”, as wanted.

Decreasing the timeouts first leads to an IDoc with status 64 (“IDoc ready to be transferred to application”), which will never be processed without further action.

Decreasing the timeout even further leads to a situation where the IDoc cannot be saved in SAP at all.

In any case the timeout exception looks like

 

With the code above it is possible to adjust the timeout values to your needs so that the IDocs are processed as desired in SAP.

Attention: in the case of a timeout exception you cannot simply resend the IDoc, because of the scenarios described above. Before sending the IDoc again, further checks have to be done:

For example:

1) Check the table EDIDC (IDoc control records).

2) Get the content of the last IDocs sent to the system (using the information from 1) and check the content using a remote-enabled function module with the following code.

ZFS now also available for Linux

I just tested the Debian package from the website

http://zfsonlinux.org/debian.html

Usually I use ZFS on Solaris. The first steps on a Debian box were straightforward and without any problems. After the package installation I was able to create a zpool in the same way as I usually do on Solaris.
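For example (pool name and devices are placeholders; the commands need root):

```shell
zpool create tank mirror /dev/sdb /dev/sdc   # mirrored pool from two disks
zfs create tank/data                         # a filesystem inside the pool
zpool status tank                            # check pool health
```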

In this way all the advantages, like comfortable snapshots (including differential snapshots, useful for backup purposes), are now also available to Debian users.
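A differential snapshot backup can then be sketched like this (dataset, snapshot, and host names are examples):

```shell
zfs snapshot tank/data@monday
# ... work on the filesystem ...
zfs snapshot tank/data@tuesday
# Send only the changes between the two snapshots to a backup host.
zfs send -i tank/data@monday tank/data@tuesday | ssh backuphost zfs receive backup/data
```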

Great Job!

grep-like search in Windows
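On Windows, the built-in findstr command offers grep-like searching from a cmd.exe prompt, for example:

```
findstr /s /i /n "pattern" *.txt
```

Here /s searches subdirectories, /i ignores case, and /n prints line numbers.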

 

CRM order status information

The order header table CRMD_ORDERADM_H has a unique GUID (header GUID) for each order. With this header GUID the status information of the order can be retrieved from table CRM_JEST.

The status IDs can be resolved to texts using different text tables such as TJ30T (user statuses) and TJ02T (system statuses), depending on the order status type.