Opened 3 years ago

Last modified 3 years ago

#1831 reopened enhancement

Enable MPI in FFTW3

Reported by: Erik Schnetter Owned by:
Priority: optional Milestone:
Component: EinsteinToolkit thorn Version: development version
Keywords: Cc:

Description (last modified by Roland Haas)

If Cactus is built with MPI, enable the MPI bindings in FFTW3.

Attachments (1)

fftw3.diff (2.6 KB) - added by Erik Schnetter 3 years ago.


Change History (8)

Changed 3 years ago by Erik Schnetter

Attachment: fftw3.diff added

comment:1 Changed 3 years ago by Erik Schnetter

Status: new → review

comment:2 Changed 3 years ago by Roland Haas

Description: modified (diff)
Status: review → reopened

Looks ok to me. It seems to work (judging from the patch itself and from tests on my workstation and on datura) for configurations that use MKL for fftw3, the Debian packages, and a self-built FFTW3.

Should one add MPI_LIBS to FFTW3_LIBS? HDF5, which can also optionally use MPI, does so (added by me at one point). I think this is required for utilities that want to link against FFTW3: they do not benefit from the thorn dependency tracking and can only use FFTW3_LIBS.
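A minimal sketch of what that suggestion could look like in the thorn's configure script (the file layout and the MPI_DIR test are assumptions modeled on the Cactus ExternalLibraries convention, not taken from the attached patch):

```shell
# Sketch of a configure.sh fragment for the FFTW3 thorn.
# Assumption: MPI_DIR is non-empty iff Cactus was configured with MPI,
# and MPI_LIBS holds the MPI link libraries, as in the HDF5 thorn.
if [ -n "${MPI_DIR}" ]; then
    # Mirror the HDF5 thorn: expose the MPI libraries through
    # FFTW3_LIBS so that stand-alone utilities that link only against
    # FFTW3_LIBS (and see no thorn dependency tracking) still resolve
    # the MPI symbols pulled in by libfftw3_mpi.
    FFTW3_LIBS="fftw3_mpi fftw3 ${MPI_LIBS}"
else
    FFTW3_LIBS="fftw3"
fi
```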

Last edited 3 years ago by Roland Haas (previous) (diff)

comment:3 Changed 3 years ago by Roland Haas

Description: modified (diff)

comment:4 Changed 3 years ago by Frank Löffler

Assuming the thorn detects an external fftw library that was not built with MPI -- what should happen then? It seems that with the current patch libfftw3_mpi(.so) would be added as a library without checking that a) it is needed (it was not so far) and b) it actually exists.

In fact, I do get the expected linker error with the patch: cannot find -lfftw3_mpi

MPI bindings for fftw3 might need to be installed separately; e.g., I have libfftw3-dev [1] installed, but not libfftw3-mpi-dev [2].
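One way to avoid the hard linker failure would be an explicit existence check before adding -lfftw3_mpi. The following is a hypothetical sketch (not part of the attached patch; the compiler variable and the fallback policy are assumptions):

```shell
# Probe whether libfftw3_mpi can actually be linked before adding it.
# Assumption: CC points at a compiler that can see the MPI headers/libs
# (an MPI wrapper such as mpicc would be the safest choice).
cat > conftest.c <<'EOF'
int main(void) { return 0; }
EOF
if ${CC:-cc} conftest.c -lfftw3_mpi -lfftw3 -o conftest 2>/dev/null; then
    # The MPI bindings exist and link: use them.
    FFTW3_LIBS="fftw3_mpi fftw3"
else
    # Fall back to the serial library instead of failing at link time
    # with "cannot find -lfftw3_mpi".
    FFTW3_LIBS="fftw3"
fi
rm -f conftest.c conftest
```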

We would need to warn users if the ET were to require MPI bindings for fftw (which it would if we assumed them to be present whenever MPI is used).

Another issue might be that the installed MPI bindings for fftw might not be linked against the version of MPI that Cactus is configured to use. We would have to check for that. libfftw3-mpi on my system seems to use openmpi, which happens to be the MPI I also use for Cactus. That fftw3-mpi package for Ubuntu also uses openmpi, while the simfactory configuration for ubuntu advises users to install mpich2 (but I checked that using openmpi would also work).

We could of course add the MPI bindings only for the case when FFTW is built by Cactus, but then thorns could not rely on them being present, which would defeat the purpose, I would guess.

All these complications make me wonder whether this change is worth the trouble. It might be, but what would the MPI bindings be used for?

comment:5 Changed 3 years ago by Erik Schnetter

For simplicity I would go the same route that we are taking for HDF5:

  • enable using MPI features if they are there
  • if we build FFTW3 ourselves and we have MPI, enable MPI features (I think we don't do this for HDF5, but we should)
  • allow linking against a non-MPI FFTW3 system install for simplicity, which will break if a thorn needs a parallel FFT
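The three points above could be sketched as configure-time logic roughly as follows (the variable names FFTW3_DIR, MPI_DIR, and FFTW3_EXTRA_CONFIGURE_ARGS are assumptions; --enable-mpi is FFTW3's own configure flag for its MPI interface):

```shell
# Sketch of the HDF5-style scheme for FFTW3.
# Assumption: FFTW3_DIR is empty iff Cactus builds FFTW3 itself.
if [ -z "${FFTW3_DIR}" ]; then
    # Building FFTW3 ourselves: enable its MPI interface whenever
    # Cactus itself has MPI available.
    if [ -n "${MPI_DIR}" ]; then
        FFTW3_EXTRA_CONFIGURE_ARGS="--enable-mpi"
    fi
else
    # Using a system install: accept it even without MPI bindings.
    # A thorn that needs a parallel FFT will then fail to link, which
    # is the accepted trade-off for simplicity.
    :
fi
```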

Yes, we should add the MPI libraries explicitly to the FFTW3 libraries, as Roland describes.

If FFTW3 uses the wrong MPI version -- that's tough; it is difficult to detect, and such inconsistencies can exist for all libraries that Cactus uses (not just for MPI). The way out is to require users to remedy this in their option list.

comment:6 Changed 3 years ago by Erik Schnetter

This change allows parallel FFTs. Without this, only process-local FFTs are possible, which is limiting. I'd assume that most times you want an FFT, you'll have a uniform grid, and if you are running on multiple processes you will need the parallel version.

comment:7 Changed 3 years ago by Roland Haas

We currently have thorns in the ET that use FFTW3 (PITTNullCode/SphericalHarmonicRecon), so whatever we set up should not break them. My experience with codes using FFTW (SphericalHarmonicRecon, SpEC) was that they did process-local FFTs of fairly small size, not one large multi-process FFT, so I would not force all FFTW installations to offer fftw_mpi just to support the thorn's current use plus possible future use on large arrays. Possibly a switch named FFTW3_ENABLE_MPI, analogous to HDF5_ENABLE_CXX, would make sense? If it is set, then we either build fftw3 with MPI enabled or add the MPI libs to its linking libraries.
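The proposed switch could look roughly like this (a sketch only: the default value and variable handling are assumptions, modeled on how HDF5_ENABLE_CXX is used elsewhere in the ET):

```shell
# Sketch of a FFTW3_ENABLE_MPI option, off by default so that existing
# thorns doing process-local FFTs keep working unchanged.
FFTW3_ENABLE_MPI=${FFTW3_ENABLE_MPI:-no}
if [ "${FFTW3_ENABLE_MPI}" = "yes" ]; then
    if [ -z "${FFTW3_DIR}" ]; then
        # We build FFTW3 ourselves: configure it with MPI enabled.
        FFTW3_EXTRA_CONFIGURE_ARGS="--enable-mpi"
    fi
    # Either way, link the MPI bindings and the MPI libraries
    # themselves (cf. the earlier suggestion to add MPI_LIBS).
    FFTW3_LIBS="fftw3_mpi fftw3 ${MPI_LIBS}"
else
    FFTW3_LIBS="fftw3"
fi
```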
