We propose a novel approach for solving partial differential equations (PDEs) based on randomized neural networks and the Petrov–Galerkin method, which we call the RNN-PG methods. This approach uses randomized neural networks to approximate unknown functions and allows a flexible choice of test functions, such as finite element basis functions, Legendre or Chebyshev polynomials, or neural networks. We apply the RNN-PG methods to a variety of problems, including Poisson problems in primal or mixed formulations and time-dependent problems treated with a space–time approach. This talk is based on the paper published in Communications in Nonlinear Science and Numerical Simulation 127 (2023), 107518, which is adapted from the work originally posted on arXiv by the same authors (arXiv:2201.12995, Jan 31, 2022). The new ingredients include nonlinear PDEs, such as Burgers' equation, and a numerical example of a high-dimensional heat equation. Numerical experiments show that the RNN-PG methods can achieve high accuracy with a small number of degrees of freedom. Furthermore, the RNN-PG methods have several advantages: they are mesh-free, handle different boundary conditions easily, solve time-dependent problems efficiently, and solve high-dimensional problems quickly. These results indicate the great potential of the RNN-PG methods in the field of numerical methods for PDEs. We will also discuss our recent work on local randomized neural networks with discontinuous Galerkin methods for PDEs (arXiv:2206.05577, Jun 11, 2022; arXiv:2305.16060, May 25, 2023).
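To make the idea concrete, the following is a minimal sketch (my own illustrative code, not the authors' implementation) of the RNN-PG principle on a 1D Poisson problem: a network with fixed random hidden weights approximates the unknown function, so only the linear output weights remain, and these are determined by imposing the Petrov–Galerkin weak form against a set of test functions (here, sines). All parameter choices below are assumptions made for illustration.

```python
# RNN-PG sketch for -u''(x) = f(x) on (0,1), u(0) = u(1) = 0,
# with f chosen so that the exact solution is u(x) = sin(pi*x).
import numpy as np

rng = np.random.default_rng(0)
M, K = 60, 120                       # hidden units / sine test functions (assumed sizes)
w = rng.uniform(-4.0, 4.0, M)        # fixed random hidden weights (never trained)
b = rng.uniform(-4.0, 4.0, M)       # fixed random hidden biases

def phi(x):                          # hidden-layer features, shape (len(x), M)
    return np.tanh(np.outer(x, w) + b)

def dphi(x):                         # x-derivatives of the features
    return (1.0 - np.tanh(np.outer(x, w) + b) ** 2) * w

# Gauss-Legendre quadrature mapped from [-1, 1] to [0, 1]
xq, wq = np.polynomial.legendre.leggauss(200)
xq, wq = 0.5 * (xq + 1.0), 0.5 * wq

f = lambda x: np.pi ** 2 * np.sin(np.pi * x)     # manufactured right-hand side
u_exact = lambda x: np.sin(np.pi * x)

# Weak form (boundary terms vanish since v_k(0) = v_k(1) = 0):
#   sum_j c_j * int phi_j'(x) v_k'(x) dx = int f(x) v_k(x) dx,  v_k = sin(k*pi*x)
k = np.arange(1, K + 1)
vk = np.sin(np.pi * np.outer(k, xq))                            # (K, Q)
dvk = (np.pi * k)[:, None] * np.cos(np.pi * np.outer(k, xq))    # (K, Q)
A = (dvk * wq) @ dphi(xq)                                       # (K, M)
rhs = (vk * wq) @ f(xq)                                         # (K,)

# Dirichlet boundary conditions added as penalized least-squares rows.
beta = 1e3
A = np.vstack([A, beta * phi(np.array([0.0, 1.0]))])
rhs = np.concatenate([rhs, [0.0, 0.0]])

# One linear least-squares solve for the output weights -- no iterative training.
c, *_ = np.linalg.lstsq(A, rhs, rcond=None)

xs = np.linspace(0.0, 1.0, 201)
err = np.max(np.abs(phi(xs) @ c - u_exact(xs)))
print(f"max pointwise error: {err:.2e}")
```

Because the hidden weights are frozen, the whole solve reduces to one rectangular linear system, which is where the reported efficiency comes from; swapping the sine test functions for finite element basis functions or another network changes only the rows of the system.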