
Shared object with MPI

0 votes / 23 April 2020

Is it possible to write a shared library that uses MPI and then somehow open it from Python (for example, with ctypes)? I wrote the library in C++. I have a simple function:

#include <mpi.h>

extern "C" {

void bandb()
{
    int mpiProcessId;

    MPI_Init(NULL, NULL);
    MPI_Comm_rank(MPI_COMM_WORLD, &mpiProcessId);

    // SUPERVISORY_PROCESS, problemInstance and the perform* functions are defined elsewhere in the library.
    if (SUPERVISORY_PROCESS == mpiProcessId) {
        // This rank coordinates the work.
        performSupervisoryProcessOperations(&problemInstance, determineLowerEstimation);
    } else {
        // All other ranks act as workers.
        // pass options flags (). Create object and pass it here.
        performWorkerProcessOperations(determineLowerEstimation);
    }

    MPI_Finalize();
}

}  // extern "C"

and I am trying to open this library from Python code.
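A minimal sketch of what the loading side might look like (the actual Python script is not shown in the question; the library name libbandb.so and the build command are assumptions):

import ctypes

# Assumed build step: mpicxx -shared -fPIC bandb.cpp -o libbandb.so
# Load the shared library and call the exported entry point.
lib = ctypes.CDLL("./libbandb.so")
lib.bandb.restype = None  # bandb() returns void
lib.bandb()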

Unfortunately, I get the following messages in the console:

[WS-6KDQ042:11413] mca: base: component_find: unable to open /usr/lib/openmpi/lib/openmpi/mca_shmem_mmap: /usr/lib/openmpi/lib/openmpi/mca_shmem_mmap.so: undefined symbol: opal_show_help (ignored)
[WS-6KDQ042:11413] mca: base: component_find: unable to open /usr/lib/openmpi/lib/openmpi/mca_shmem_posix: /usr/lib/openmpi/lib/openmpi/mca_shmem_posix.so: undefined symbol: opal_shmem_base_framework (ignored)
[WS-6KDQ042:11413] mca: base: component_find: unable to open /usr/lib/openmpi/lib/openmpi/mca_shmem_sysv: /usr/lib/openmpi/lib/openmpi/mca_shmem_sysv.so: undefined symbol: opal_show_help (ignored)
--------------------------------------------------------------------------
It looks like opal_init failed for some reason; your parallel process is
likely to abort.  There are many reasons that a parallel process can
fail during opal_init; some of which are due to configuration or
environment problems.  This failure appears to be an internal failure;
here's some additional information (which may only be relevant to an
Open MPI developer):

  opal_shmem_base_select failed
  --> Returned value -1 instead of OPAL_SUCCESS
--------------------------------------------------------------------------
--------------------------------------------------------------------------
It looks like orte_init failed for some reason; your parallel process is
likely to abort.  There are many reasons that a parallel process can
fail during orte_init; some of which are due to configuration or
environment problems.  This failure appears to be an internal failure;
here's some additional information (which may only be relevant to an
Open MPI developer):

  opal_init failed
  --> Returned value Error (-1) instead of ORTE_SUCCESS
--------------------------------------------------------------------------
--------------------------------------------------------------------------
It looks like MPI_INIT failed for some reason; your parallel process is
likely to abort.  There are many reasons that a parallel process can
fail during MPI_INIT; some of which are due to configuration or environment
problems.  This failure appears to be an internal failure; here's some
additional information (which may only be relevant to an Open MPI
developer):

  ompi_mpi_init: ompi_rte_init failed
  --> Returned "Error" (-1) instead of "Success" (0)
--------------------------------------------------------------------------
*** An error occurred in MPI_Init
*** on a NULL communicator
*** MPI_ERRORS_ARE_FATAL (processes in this communicator will now abort,
***    and potentially your MPI job)
[WS-6KDQ042:11413] Local abort before MPI_INIT completed successfully; not able to aggregate error messages, and not able to guarantee that all other processes were killed!

Normally, I would have to launch an MPI process using mpirun.

The goal is to connect my algorithm, written in C++ and MPI, to Python. Do you have any ideas how I can solve this problem in a simple way?

...