
BDDC Example #728 (Open)

jeremylt wants to merge 1 commit into main from jeremy/bddc

Conversation

@jeremylt (Member) commented Apr 6, 2021

Nothing much to look at here yet. I'm just putting this here to make it easier to see/comment on what I'm doing.

@jeremylt jeremylt self-assigned this Apr 6, 2021
@jeremylt jeremylt force-pushed the jeremy/bddc branch 7 times, most recently from cf2dd0e to 68502ef Compare April 14, 2021 18:32
@jeremylt jeremylt force-pushed the jeremy/bddc branch 5 times, most recently from 3c49d53 to 07eaff5 Compare April 16, 2021 20:23
@jeremylt (Member Author)

There is some debugging to do after #744 merges, but this should be close. The open task is prolonging into the broken space: we don't really have a way to go from one QFunction output into a pair of target vectors.

@jeremylt jeremylt force-pushed the jeremy/bddc branch 2 times, most recently from c2b2f48 to 53349b4 Compare April 18, 2021 22:32
@jeremylt (Member Author)

I have not started debugging yet, but all of the pieces I want are there. Here's to hoping.

@jeremylt jeremylt force-pushed the jeremy/bddc branch 6 times, most recently from 10d7e27 to c250336 Compare April 19, 2021 16:00
@jeremylt (Member Author)

Huzzah - it compiles. Now to see what else is broken about it.

@jeremylt jeremylt force-pushed the jeremy/bddc branch 3 times, most recently from b82348d to bc61d0a Compare April 19, 2021 18:59
@jeremylt (Member Author)

The number of iterations grows less rapidly when I increase the number of cells, though, so that's a win.

@jeremylt jeremylt force-pushed the jeremy/bddc branch 2 times, most recently from b5a9c23 to 1168263 Compare May 24, 2021 15:03
@jeremylt jeremylt force-pushed the jeremy/bddc branch 5 times, most recently from 6f826b0 to 34e965e Compare June 28, 2021 16:23
@stefanozampini (Contributor)

@jeremylt @jedbrown I did not know you had a working BDDC code. I was wondering if the Arrinv solve can be hooked up in PCBDDC as a subdomain solver.

@jedbrown (Member) commented Oct 1, 2021

Are you wanting to use PCBDDC with one subdomain per process and plug in libCEED's Arrinv as the subdomain solver? Or would you make PCBDDC work with many subdomains, with hooks so that libCEED is responsible for (batched) matrix-free operators?

I would really like to use adaptive coarse basis construction in our framework, still with separable element solves.

@stefanozampini (Contributor)

The idea would be to reuse the PCBDDC code and hook up your specialized solvers for the interior and Arr. Adaptive coarse spaces can be built provided we have the explicit Schur complement. I have an old branch where I started supporting multiple subdomains per process: https://gitlab.com/petsc/petsc/-/tree/stefanozampini/bddc-ceed. I'm also interested in this.

@jeremylt (Member Author) commented Oct 1, 2021

Note: This code mostly works. There is some small bug that is killing our convergence that I haven't had spare time to chase down.

@stefanozampini (Contributor)

> Note: This code mostly works. There is some small bug that is killing our convergence that I haven't had spare time to chase down.

Getting BDDC to work can be painful, I know that :-)

@jeremylt (Member Author)

Rebased for changes on main. Same slow convergence, but it does converge.

```
-- CEED Benchmark Problem 3 -- libCEED + PETSc + BDDC --
  PETSc:
    PETSc Vec Type                     : seq
  libCEED:
    libCEED Backend                    : /cpu/self/xsmm/blocked
    libCEED Backend MemType            : host
  Mesh:
    Number of 1D Basis Nodes (p)       : 3
    Number of 1D Quadrature Points (q) : 4
    Global Nodes                       : 125
    Owned Nodes                        : 125
    DoF per node                       : 1
  BDDC:
    Injection                          : scaled
    Global Interface Nodes             : 8
    Owned Interface Nodes              : 8
  KSP:
    KSP Type                           : cg
    KSP Convergence                    : CONVERGED_RTOL
    Total KSP Iterations               : 26
    Final rnorm                        : 1.132970e-09
  BDDC:
    PC Type                            : shell
  Performance:
    Pointwise Error (max)              : 4.185030e-02
    CG Solve Time                      : 0.0201494 (0.0201494) sec
```

@jedbrown (Member) commented Jan 19, 2022 via email

@jeremylt (Member Author)

| Size     | Its | σ max   | σ min     | max/min |
|----------|-----|---------|-----------|---------|
| 3x3x3    | 26  | 12.0304 | 0.305409  | 39.3913 |
| 5x5x5    | 64  | 12.2401 | 0.264016  | 46.3613 |
| 10x10x10 | 79  | 12.4294 | 0.108125  | 114.954 |
| 15x15x15 | 110 | 14.8259 | 0.0588987 | 251.719 |
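As a sanity check on the table above, the classical CG error bound says iteration counts should grow roughly like the square root of the condition number. A quick sketch using the reported extreme singular values (the 1e-9 tolerance is an assumption based on the rnorm printed earlier, not a value taken from the run options):

```python
import math

# Extreme singular values reported in the table above: size -> (iterations, sigma_max, sigma_min)
runs = {
    "3x3x3":    (26,  12.0304, 0.305409),
    "5x5x5":    (64,  12.2401, 0.264016),
    "10x10x10": (79,  12.4294, 0.108125),
    "15x15x15": (110, 14.8259, 0.0588987),
}

for size, (its, smax, smin) in runs.items():
    kappa = smax / smin
    # Classical CG bound: the error is reduced by eps after about
    # 0.5 * sqrt(kappa) * ln(2/eps) iterations.
    bound = math.ceil(0.5 * math.sqrt(kappa) * math.log(2.0 / 1e-9))
    print(f"{size:>8}: kappa = {kappa:7.2f}, sqrt(kappa) = {math.sqrt(kappa):5.2f}, "
          f"CG bound = {bound:4d}, observed its = {its}")
```

The observed counts sit under the bound, but the growth of κ itself (roughly 39 to 252) is the real signal that something is off relative to the polylogarithmic condition-number bound BDDC theory predicts.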

@jedbrown (Member)

This is for 3D with only corners as primal dofs?

@jeremylt (Member Author)

Correct: 3D, a 2nd-order basis as the fine mesh, with corners only as the vertex space.

@jedbrown (Member) commented Jan 19, 2022 via email

@stefanozampini (Contributor)

Are you using exact solves? If so, the minimum eigenvalue must be equal to one.
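For context, this matches standard BDDC theory (Dohrmann; Mandel and Dohrmann): with exact local solves the spectrum of the preconditioned operator is bounded below by one, with a polylogarithmic condition-number bound in the subdomain-to-element size ratio. Stated as a sketch (the constant $C$ is independent of $H/h$):

```latex
\lambda_{\min}\!\left(M_{\mathrm{BDDC}}^{-1} A\right) = 1,
\qquad
\kappa\!\left(M_{\mathrm{BDDC}}^{-1} A\right) \le C \left(1 + \log \tfrac{H}{h}\right)^{2}
```

So a minimum singular value well below one (0.059 in the table above) points at inexact or inconsistent local solves or injection, rather than merely a weak coarse space.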

@jedbrown (Member)

@jeremylt I think we need to compare with ex59, or use the new COO assembly (as in Ratel) to compare with PETSc BDDC (one subdomain per process, but you can run 64 processes on Noether). Matching minimum eigenvalues is important. If the condition number table above still applies on this branch, then we're still missing something.

@jeremylt (Member Author)

No idea why it changed:

| Size     | Its | σ max   | σ min     | max/min |
|----------|-----|---------|-----------|---------|
| 5x5x5    | 89  | 50.0355 | 0.176477  | 283.524 |
| 10x10x10 | 149 | 50.0376 | 0.161565  | 309.706 |
| 20x20x20 | 188 | 50.2384 | 0.0984236 | 510.43  |

@jeremylt (Member Author) commented Jul 7, 2023

Oh cool, a memcheck backend change shows that we have a problem with how we are treating memory. That gives some direction to future debugging efforts here.

@jeremylt (Member Author)

ToDo: Fix assembly to use libCEED native assembly

@jeremylt (Member Author) commented Aug 17, 2023

Turns out the memcheck issue was unrelated - see the vector fix PR.

I wonder if the convergence issue can in part be related to the loss of accuracy from using coloring to assemble the Schur problem, which is part of why I want to switch to libCEED assembly (it will also be faster). Nope: I assembled the way I did for good reasons, since I'm assembling the action of several operators in sequence with other actions in between.

@jeremylt (Member Author)

Note to self: I'm suspicious about possible pollution of helper function input vectors. Also, the addition of the preconditioning contributions back from the restricted problem looks funky.

