Opened 4 years ago

Last modified 4 years ago

#1568 new enhancement

Reduce overhead of Formaline

Reported by: Ian Hinder Owned by:
Priority: major Milestone:
Component: Cactus Version: development version
Keywords: Cc:


The Formaline thorn, used for storing important information about a simulation in the simulation output, currently has some performance and size overheads which discourage people from using it. These should be reduced or mitigated to encourage people to use this important thorn.

This ticket is based on discussion in #1565.

Problems with Formaline:

  • During compilation, Formaline stores the built source tree in a cactusjar.git repository. My impression is that this takes a lot of time on slow filesystems.
  • During compilation, Formaline stores the whole source tree as tarballs in the executable. My impression is that this is also slow.
  • The source tarballs output in the simulation output directory are often many times larger than the rest of the simulation, and cause a lot of overhead when transferring simulations for analysis. This might be improved by a better sync tool (simfactory 3?) which identified tarballs as identical to those already transferred and skipped the transfer. I would like the source tarballs to be identical if the contents are identical, even if Cactus has been rebuilt. I think I observed that this was not the case (maybe a build ID is included?).
  • As an aside: there is a message at link time that Formaline has finished doing something, but this is misleading; Formaline performs other tasks after this message is displayed. The wait for the final executable is not due solely to linking, but also to an archiving task of Formaline.
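One possible way to get content-identical tarballs, sketched below under the assumption that GNU tar 1.28 or newer is available (for --sort=name; bsdtar lacks it) and with purely illustrative paths: pinning the metadata that normally varies between builds (timestamps, ownership, member order) makes the archive a function of the file contents alone, so a sync tool could skip it by checksum.

```shell
# Sketch only: deterministic tarball creation with GNU tar (>= 1.28).
rm -rf /tmp/formaline-demo && mkdir -p /tmp/formaline-demo/src
echo 'int main(void) { return 0; }' > /tmp/formaline-demo/src/main.c

make_tarball() {
  # Pin member order, timestamps, and ownership so the archive depends
  # only on file contents, not on when or by whom it was built.
  tar --sort=name --mtime='@0' --owner=0 --group=0 --numeric-owner \
      -C /tmp/formaline-demo -cf "$1" src
}

# Two "builds" of the same tree produce byte-identical archives.
make_tarball /tmp/formaline-build1.tar
make_tarball /tmp/formaline-build2.tar
cmp -s /tmp/formaline-build1.tar /tmp/formaline-build2.tar && echo identical
```

If a build ID must be recorded, storing it next to the tarball rather than inside it would keep the tarball checksum stable across rebuilds.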

Attachments (0)

Change History (10)

comment:1 Changed 4 years ago by Barry Wardell

What about providing the option to use a more lightweight representation of the source tree? For example, the relevant svn revision or git sha1, plus a diff containing any local changes. Such a representation would also make it convenient to share your exact source tree with others, as long as there is a straightforward way to reconstruct the source tree from the information.

This would rely on the git/svn repositories remaining available, so it may not be desirable for everyone, but it is something I would likely use.

comment:2 Changed 4 years ago by Ian Hinder

Yes, I have thought about this extensively. For production simulations that end up in papers, I think you probably want the tarballs as well, just for safety. But in general use, when the repositories are still available and haven't been rewritten, having the version control information is much more useful: you can easily see which changes are present in a simulation, and you can refer to them by commit message and author rather than by source code diff. The main problem is that repository information is usually not synced to the machine where the tree is built. For my own workflow, where I never modify source on the remote machine, it would be sufficient to collect the "manifest" of the source tree (i.e. repository/commit/diff information) locally and sync it across when source tree changes are synced. It could then be included in the simulation executable and output by Formaline.
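A minimal sketch of such a manifest collector (the helper name and paths are hypothetical, not part of Formaline; it assumes the thorn is a git checkout and that git is available):

```shell
# Hypothetical helper: record origin URL, commit id, and local
# modifications for one thorn checkout.
collect_manifest() {
  dir="$1"; out="$2"
  echo "repository: $(git -C "$dir" config --get remote.origin.url)" > "$out"
  echo "commit: $(git -C "$dir" rev-parse HEAD)" >> "$out"
  git -C "$dir" diff HEAD > "$out.patch"   # uncommitted local changes
}

# Demo on a scratch repository:
rm -rf /tmp/manifest-demo && mkdir -p /tmp/manifest-demo/thorn
cd /tmp/manifest-demo/thorn
git init -q .
echo 'void f(void) {}' > src.c
git add src.c
git -c user.name=demo -c user.email=demo@example.org commit -qm 'initial import'
echo '/* local change */' >> src.c
collect_manifest . /tmp/manifest-demo/manifest
```

The manifest plus patch file is tiny, so syncing it alongside the sources (and embedding it in the executable) would cost essentially nothing.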

comment:3 Changed 4 years ago by Erik Schnetter

That may be a nice idea. However, I would be careful especially with git repositories where commits or branches may exist only locally.

I don't think that storing the source code should be as expensive as it is. If we exclude the ExternalLibraries tarballs, then each source file is already read by the compiler and converted to an object file, and the executable (without tarballs) is larger than the tarballs; I don't understand why generating them is so expensive. My current assumption is that this process is insufficiently parallelized by make, and that e.g. creating the tarballs requires a lot of disk I/O with little CPU activity and happens all at once. Spreading this out over a longer time, and having tar read the source files near the time when the compiler reads them anyway, should improve performance considerably.

We may also further improve performance by not storing the temporary tarballs on disk before using perl to convert them to a C source file; this could happen in one step, and the result could be fed directly to the compiler. This would require writing a small C program to do so, based on zlib for compression.

Regarding the git repositories: There is currently one git repo per executable, which is a serialization choking point. Using one git repo per thorn would be much faster (since parallel), and combining these (via branches? merging? simple sequential commits?) may be faster than a single, large commit.
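A sketch of the per-thorn variant (layout and names are illustrative only: one scratch repository per thorn directory, with the commits running concurrently instead of funnelling everything through a single cactusjar.git):

```shell
# Sketch: one git repository per thorn, committed in parallel.
rm -rf /tmp/thorn-demo && mkdir -p /tmp/thorn-demo/ThornA /tmp/thorn-demo/ThornB
echo 'int a;' > /tmp/thorn-demo/ThornA/a.c
echo 'int b;' > /tmp/thorn-demo/ThornB/b.c

for thorn in /tmp/thorn-demo/Thorn*; do
  (
    git init -q "$thorn"
    git -C "$thorn" add -A
    git -C "$thorn" -c user.name=demo -c user.email=demo@example.org \
        commit -qm "snapshot of ${thorn##*/}"
  ) &    # each thorn's commit proceeds independently
done
wait
```

Whether the per-thorn repositories are later merged, kept as branches, or simply left separate is an open design question; the parallelism alone should remove the serialization choke point.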

comment:4 Changed 4 years ago by Roland Haas

You can actually generate the tarfiles etc. on the fly without having to write any C code:

tar c foo bar | od -to1 -v | awk '
  BEGIN {
    print "const unsigned char *SpEC_tarball_data ="
  }
  {
    printf "\""
    for(i = 2 ; i <= NF ; i++) {
      printf "\\%s", $i; sz++
    }
    print "\""
  }
  END {
    print ";\nint SpEC_tarball_size = ", sz, ";"
  }' | gcc -O0 -o SpEC-tarball.o -xc -c -

does the trick without any intermediate files. This was an attempt to include tarballs in SpEC, which is so far stalled because it is hard to decide which files to include.
icc can also be made to read from pipes, though I think we can safely rely on gcc being around. With this, the C code in Formaline sees a char* pointer SpEC_tarball_data pointing to the tarball data, and SpEC_tarball_size gives its size, so a simple fwrite(SpEC_tarball_data, SpEC_tarball_size, 1, stdout) can write the tarball. This only uses POSIX tools, I think, so it should work on all machines.

comment:5 Changed 4 years ago by Frank Löffler

For personal use, I would probably always use the "tarball" option: local changes, local unpushed commits, and whatnot would not be easy to include, and in my personal experience the tarball creation is not too slow. What I do see is that Formaline includes a build ID in the executable, which forces a relink every time I rebuild Cactus, even if nothing else changed. If nothing else changes, I don't see why Formaline should use a new build ID, force a new link stage, and update the executable.

Another issue I have (had) is that (GNU) tar complains if files change while it processes them. A change in atime also counts as a change, and this can happen when things are compiled in parallel. We don't care about the atime, so this isn't a real problem, but the messages are annoying regardless. I am not aware of an option to tar to prevent this. The only way I currently see to avoid it entirely would be to make a copy before running tar, but that would be unacceptable performance-wise. On the other hand, I no longer see this on my development machines because I mounted my local file systems with "relatime": access time changes are "almost ignored" at the file system level.
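A possible mitigation, assuming a GNU tar of version 1.23 or newer (which, I believe, added per-category --warning controls; bsdtar lacks the option): suppress only the "file changed as we read it" message. Note this is just a sketch with illustrative paths, and the non-zero "some files differ" exit status may still occur even with the message silenced.

```shell
# Sketch: suppress only the "file changed as we read it" warning
# (GNU tar >= 1.23; message suppression only, not the exit status).
mkdir -p /tmp/quiet-demo/src
echo 'content' > /tmp/quiet-demo/src/file.c
tar --warning=no-file-changed -cf /tmp/quiet-demo.tar -C /tmp/quiet-demo src
```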

Besides these two points (of which really only the first bothers me), I am currently quite happy with Formaline. It has "issues" from time to time (like every code), and usually they are fixed quite quickly.

Roland: your code would put all data into one source file to be compiled. I believe this fails on several machines because the file gets too large, which is why we have to split some of the larger thorns into several C source files that are compiled separately and combined at the C level.

comment:6 Changed 4 years ago by Erik Schnetter

If the tarball is too large, then one has to create multiple C files from it. Unfortunately, most compilers are really bad at compiling large static arrays, in the sense that they require large amounts of memory and take a long time to compile such a file. Apart from that, Formaline uses Perl instead of awk.
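One way to do the splitting, sketched with POSIX split(1) and purely illustrative names: chunk the tarball first, then run the od|awk conversion from comment 4 once per chunk, yielding many small generated C files that make can compile in parallel (only stub declarations are emitted below).

```shell
# Sketch: split a large tarball into fixed-size chunks; each chunk would
# then become its own small generated C file.
rm -rf /tmp/split-demo && mkdir -p /tmp/split-demo
dd if=/dev/zero of=/tmp/split-demo/source.tar bs=1000 count=100 2>/dev/null
split -b 32768 /tmp/split-demo/source.tar /tmp/split-demo/chunk.

n=0
for chunk in /tmp/split-demo/chunk.*; do
  # real code would pipe "$chunk" through the od|awk converter here;
  # a stub declaration stands in for the generated array
  printf 'extern const unsigned char tarball_part_%d[];\n' "$n" \
    > "/tmp/split-demo/part_$n.c"
  n=$((n + 1))
done
echo "created $n parts"
```

A small index file listing the part sizes would then let the Formaline C code fwrite the parts in order to reassemble the original tarball.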

comment:7 Changed 4 years ago by Roland Haas

Sigh. Another comment lost due to the captcha system badly interacting with the browser.

I wanted to ask if we know more details on what makes large files fail to compile. Is it the linker, the compiler, or the OS limiting memory consumption?

I played around with the best way of compiling this and found that, yes, arrays are indeed bad; I therefore use a long string with all characters encoded in octal notation. I am also using only gcc so that I have a compiler that is the same everywhere. Perl is also fine with me; it's not in POSIX, though, so I did not want to use it in SpEC. Really, the intention is only to show that one can pipe the source into gcc.

comment:8 Changed 4 years ago by Erik Schnetter

The problem is purely the compiler. Yes, using gcc should work, except (of course) on Blue Gene or Intel MIC or other systems that essentially cross-compile.

I was thinking of using the Bash syntax "<(some perl command here)" to pipe the output into the compiler.

comment:9 Changed 4 years ago by Roland Haas

Ah, I had not thought of the MIC and Blue Genes. The trick of compiling stdin also works with Intel compilers (I tried this for different reasons), but I have no idea how fast they are, or whether PGI has similar issues. This all only really matters if the thing that makes Formaline slow is the number of files it creates on disk.

comment:10 in reply to:  8 Changed 4 years ago by Barry Wardell

Replying to eschnett:

The problem is purely the compiler. Yes, using gcc should work, except (of course) on Blue Gene or Intel MIC or other systems that essentially cross-compile.

I have noticed that GCC is particularly bad with large files. I have had much better success (sometimes by orders of magnitude) with LLVM-based compilers such as clang, but they are probably not so widely available yet.
