PETSc: PCMPI Introduction and Usage
PCMPI
The MPI Linear Solver Server mode in PETSc is primarily a runtime option that allows a user's application code to run on only one MPI process, while the remaining processes act as a "server" that performs the actual parallel linear algebra. This is particularly useful for applications with a hybrid MPI/OpenMP model, or where the main application is largely sequential but needs access to PETSc's parallel solvers.
https://petsc.org/release/manualpages/PC/PCMPI/
https://petsc.org/release/src/ksp/ksp/tutorials/ex1.c.html
Example Code
static char help[] = "Solves a linear system with KSP.\n";

#include <petscksp.h>

int main(int argc, char **args)
{
  Vec            x, b;   /* approximate solution, right-hand side */
  Mat            A;      /* linear system matrix */
  KSP            ksp;    /* linear solver context */
  PetscInt       n = 10; /* size of the linear system */
  PetscErrorCode ierr;

  ierr = PetscInitialize(&argc, &args, (char *)0, help); CHKERRQ(ierr);

  /* The user code (matrix/vector creation, KSP setup, KSPSolve) runs here.
     These calls are made on PETSC_COMM_WORLD which, in server mode, is
     effectively PETSC_COMM_SELF on rank 0. */
  ierr = VecCreate(PETSC_COMM_WORLD, &b); CHKERRQ(ierr);
  ierr = VecSetSizes(b, PETSC_DECIDE, n); CHKERRQ(ierr);
  ierr = VecSetFromOptions(b); CHKERRQ(ierr);
  ierr = VecDuplicate(b, &x); CHKERRQ(ierr);

  ierr = MatCreate(PETSC_COMM_WORLD, &A); CHKERRQ(ierr);
  ierr = MatSetSizes(A, PETSC_DECIDE, PETSC_DECIDE, n, n); CHKERRQ(ierr);
  ierr = MatSetFromOptions(A); CHKERRQ(ierr);
  ierr = MatSetUp(A); CHKERRQ(ierr);

  /* ... fill A and b here (e.g., with MatSetValues()/VecSetValues()), then
     assemble them with MatAssemblyBegin()/MatAssemblyEnd() and
     VecAssemblyBegin()/VecAssemblyEnd() before the solve ... */

  ierr = KSPCreate(PETSC_COMM_WORLD, &ksp); CHKERRQ(ierr);
  ierr = KSPSetOperators(ksp, A, A); CHKERRQ(ierr);
  ierr = KSPSetFromOptions(ksp); CHKERRQ(ierr);
  ierr = KSPSolve(ksp, b, x); CHKERRQ(ierr);

  /* ... clean up ... */
  ierr = KSPDestroy(&ksp); CHKERRQ(ierr);
  ierr = VecDestroy(&b); CHKERRQ(ierr);
  ierr = VecDestroy(&x); CHKERRQ(ierr);
  ierr = MatDestroy(&A); CHKERRQ(ierr);
  ierr = PetscFinalize();
  return ierr;
}
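The listing above does not check the computed solution. As an optional, hedged addition (not part of the original example), the fragment below shows how the client code on rank 0 could verify the result returned by the server by forming the residual b - A*x explicitly; it reuses the A, b, x, and ierr variables from the listing and would be placed just before the cleanup calls.

/* Optional residual check (not in the original example); insert before cleanup. */
Vec       r;
PetscReal norm;
ierr = VecDuplicate(b, &r); CHKERRQ(ierr);
ierr = MatMult(A, x, r); CHKERRQ(ierr);          /* r = A*x        */
ierr = VecAYPX(r, -1.0, b); CHKERRQ(ierr);       /* r = b - A*x    */
ierr = VecNorm(r, NORM_2, &norm); CHKERRQ(ierr); /* ||b - A*x||_2  */
ierr = PetscPrintf(PETSC_COMM_WORLD, "Residual norm %g\n", (double)norm); CHKERRQ(ierr);
ierr = VecDestroy(&r); CHKERRQ(ierr);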
Running
mpiexec -n 4 ./ex1 -mpi_linear_solver_server -ksp_type gmres -pc_type ilu
What happens:
- MPI Initialization: Four MPI processes are started.
- Server Activation: The -mpi_linear_solver_server option is detected during PetscInitialize(), and PETSc switches into server mode.
- User Code (Rank 0): Only MPI rank 0 proceeds through the main function to KSPCreate(), KSPSetOperators(), and KSPSolve(). Throughout the user code, including the KSPSolve() call, the application's PETSC_COMM_WORLD is internally treated as PETSC_COMM_SELF (a communicator with only one process); see the sketch after this list.
- Solver Server (Ranks 1-3): Ranks 1, 2, and 3 enter a waiting state, ready to receive the linear system data (matrix and vector) from rank 0, solve it in parallel using the requested options (e.g., gmres with ilu preconditioning), and return the result to rank 0.
- Data Transfer: Data is automatically distributed to the server ranks, by default using shared memory if available.
- Cleanup: When rank 0 calls PetscFinalize(), all server ranks are shut down correctly.
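To make the communicator behavior described above concrete, here is a minimal, hedged sketch (not from the original post) of what the user code could print; it assumes only the standard MPI_Comm_size() and PetscPrintf() calls plus the ierr variable from the example, and could be placed anywhere after PetscInitialize() on rank 0.

/* Illustration only: per the description above, the PETSC_COMM_WORLD seen by
   the user code in server mode contains a single process, so this should
   print a size of 1 even though mpiexec started 4 processes. */
PetscMPIInt size;
ierr = MPI_Comm_size(PETSC_COMM_WORLD, &size); CHKERRMPI(ierr);
ierr = PetscPrintf(PETSC_COMM_WORLD, "PETSC_COMM_WORLD size = %d\n", (int)size); CHKERRQ(ierr);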
Key Options
Additional runtime options can be used to control the server's behavior: -mpi_linear_solver_server_view displays information about the linear systems solved by the server, and -mpi_linear_solver_server_use_shared_memory <true, false> controls whether shared memory is used to distribute data to the server ranks. Shared memory is used by default where it is available.
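A hypothetical invocation combining these flags with the earlier run line might look like this (the executable name and process count are taken from the example above):

mpiexec -n 4 ./ex1 -mpi_linear_solver_server -mpi_linear_solver_server_view -mpi_linear_solver_server_use_shared_memory false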