Cactus fails to configure on CCT cactus-test VM

Issue #1912 closed
Ian Hinder created an issue

I have just fixed some problems with the CCT cactus-test VM, but I cannot get Cactus to build. It fails during configuration with

checking host system type... x86_64-unknown-linux-gnu
checking whether make sets ${MAKE}... yes
checking whether the C compiler (gcc -g3 -march=native -std=gnu99 -rdynamic) works... no
configure: error: installation or configuration problem: C compiler cannot create executables (see configs/<configname>/config-data/config.log for details).

In config.log, there is

configure:1015: checking whether the C compiler (gcc -g3 -march=native -std=gnu99 -rdynamic) works
configure:1032: gcc -o conftest -g3 -march=native -std=gnu99 -DMPICH_IGNORE_CXX_SEEK -rdynamic conftest.c  1>&5
conftest.c:1:0: error: CPU you selected does not support x86-64 instruction set

I don't understand what is going on here.

1. Why should gcc insist that the CPU (which is 'native') support the 64-bit instruction set?
2. Why does this fail now, when it worked the last time this VM was up and running? Did the VM get moved to another host?

/proc/cpuinfo says

model name  : QEMU Virtual CPU version (cpu64-rhel6)
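
For what it's worth (a standard gcc diagnostic, not something from the original log), one can see what -march=native resolves to by asking the driver to print the cc1 command line:

gcc -march=native -E -v - </dev/null 2>&1 | grep cc1

On a working host this shows an explicit -march=<cpu> for a model the compiler recognizes; on this VM it should reveal what gcc makes of QEMU's cpu64-rhel6 CPU model.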

This was a clean build, with the configs directory wiped before building. It uses the simfactory ubuntu.cfg optionlist. The VM is a standard ET build slave running "Ubuntu 12.04.5 LTS".
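
A possible workaround (untested here, just a sketch) would be to replace -march=native in the ubuntu.cfg optionlist with the generic 64-bit baseline, which does not depend on gcc recognizing the virtual CPU model:

CFLAGS = -g3 -march=x86-64 -std=gnu99

This would, however, lose any optimization for the actual host CPU, so it is at best a stopgap until the underlying qemu/gcc problem is understood.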

Comments (7)

  1. Frank Löffler
    • changed status to open

    I've just chatted with Steve (who is in town; I'm not). I'll have to leave it to him to contact IT support directly to figure this out quickly.

  2. Ian Hinder reporter

    There is mention of this here: https://gcc.gnu.org/bugzilla/show_bug.cgi?id=61570. I tried compiling a simple hello-world program, and if you compile with -march=native (as in ubuntu.cfg), the compiler aborts, saying that the CPU doesn't support the 64-bit instruction set. Maybe a newer version of gcc has this fixed or has a workaround, though it sounds like a qemu/KVM problem to me.
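
    Roughly, the test looked like this (hello.c is illustrative; the first error is the same one configure hit, and the -march=x86-64 variant succeeding is my expectation rather than something verified in this exact form):

        echo 'int main(void) { return 0; }' > hello.c
        gcc -march=native -o hello hello.c
        # hello.c:1:0: error: CPU you selected does not support x86-64 instruction set
        gcc -march=x86-64 -o hello hello.c   # generic baseline; expected to succeed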

  3. Steven R. Brandt

    I've talked to IT support and they're looking into it. It is possible the VM was moved to new hardware recently because of memory problems on the older servers, and the failure may have something to do with that. On a side note, the OS could probably stand to be upgraded.

  4. Ian Hinder reporter

    It is important that all the build VMs are the same. Yes, we should update to Ubuntu 16.04, but this needs to be done in a controlled way across all the VMs. We also should check that Cactus works properly on 16.04 (until very recently, it would have failed due to the PETSc problem).

    I would like to move to docker containers for the Jenkins slaves eventually, so that we are not so dependent on VMs being built in a specific way, and they don't have as much internal state. This may be done by installing Docker on the existing slaves and setting up containers there.
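
    As a rough sketch of what a containerized build might look like (image, packages, and the make target are illustrative, not a tested recipe):

        docker run --rm -v "$PWD":/cactus -w /cactus ubuntu:16.04 \
            bash -c 'apt-get update && apt-get install -y build-essential gfortran && make <configname>'

    The appeal is that the toolchain comes entirely from the image, so every slave builds in an identical environment regardless of how the underlying VM was set up.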

  5. Barry Wardell

    Replying to [comment:4 hinder]:

        It is important that all the build VMs are the same. Yes, we should update to Ubuntu 16.04, but this needs to be done in a controlled way across all the VMs. We also should check that Cactus works properly on 16.04 (until very recently, it would have failed due to the PETSc problem).

    I have run the ET tests on 16.04 (specifically on tesla, which is a machine known to simfactory) and they all pass, apart from the failures related to using -ffast-math. (I didn't encounter the PETSc problem, as it is disabled in the ET thornlist.)
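
    For reference, simfactory can drive the test suite directly; the invocation was along these lines (simulation name and process count are illustrative):

        ./simfactory/bin/sim create-run ET_tests --testsuite --select-tests=all --procs 2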
