@Jonatan442
What I do not understand is the following: a large database should not matter if the application (or client in general) chunks up its requests to the database.
It seems that you are making many requests to the database, and I am pretty sure that most of them are similar or identical.
Your problem would probably not exist, or would very likely be solved, if:
a) the code (i.e. the PHP scripts) treats part of each database entry as "dynamic" (to be requested by clients/applications on demand) and the remainder as "static" (not loaded unless necessary).
For example, consider a web application serving content from a database: not all of the data can be shown on one (visible) page, so some of it does not have to be loaded at all (see the first sketch after this list).
OR
b) specific database data that is requested frequently, and for which the MySQL query is similar/identical, is served from a cache, preferably a memory-based cache (such as Redis).
For example, consider a web application that frequently serves similar/identical content from a database: one connection suffices to get the data into the memory-based cache (read: only one, or just a small number of, MySQL connections), and the cache then serves the (similar/identical) content from memory (note: Redis can serve well over 10K requests per second; see the second sketch after this list).
OR
c) the database is properly indexed, which shortens individual queries, reduces the number of concurrent MySQL connections to some extent, and certainly decreases the workload on the MySQL server (see the third sketch after this list).
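
To illustrate a), here is a minimal sketch of "load only what the page needs" logic using PDO. The table/column names (articles, id, title, summary), the DSN/credentials, and the page size of 20 are just placeholder assumptions, not anything from your setup:

```php
<?php
// Hypothetical list page: it only shows id/title/summary, so the heavy
// "static" columns (full text, blobs) are never selected at all.
$pdo = new PDO('mysql:host=localhost;dbname=mydb', 'user', 'pass');

$page    = max(1, (int)($_GET['page'] ?? 1));
$perPage = 20;                        // one visible page = 20 rows, not the whole table
$offset  = ($page - 1) * $perPage;

// Fetch only the "dynamic" part of the data, one page at a time.
$stmt = $pdo->prepare(
    'SELECT id, title, summary FROM articles ORDER BY id DESC LIMIT :lim OFFSET :off'
);
$stmt->bindValue(':lim', $perPage, PDO::PARAM_INT);
$stmt->bindValue(':off', $offset, PDO::PARAM_INT);
$stmt->execute();

$rows = $stmt->fetchAll(PDO::FETCH_ASSOC);
```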
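To illustrate b), here is a minimal cache-aside sketch using the phpredis extension (Predis would look almost identical). The cache key, the 300-second TTL, and the query are again assumptions for the sake of the example:

```php
<?php
// Hypothetical cache-aside pattern: Redis answers repeated requests from memory,
// so MySQL only sees one query per cache expiry instead of one per visitor.
$redis = new Redis();
$redis->connect('127.0.0.1', 6379);

$cacheKey = 'articles:front_page';    // hypothetical key name
$payload  = $redis->get($cacheKey);   // returns false on a cache miss

if ($payload === false) {
    // Miss: go to MySQL once, then keep the result in Redis for 5 minutes.
    $pdo  = new PDO('mysql:host=localhost;dbname=mydb', 'user', 'pass');
    $rows = $pdo->query('SELECT id, title, summary FROM articles ORDER BY id DESC LIMIT 20')
                ->fetchAll(PDO::FETCH_ASSOC);
    $payload = json_encode($rows);
    $redis->setex($cacheKey, 300, $payload);  // TTL of 300 s is an arbitrary choice
}

$rows = json_decode($payload, true);  // hits are served from memory, no MySQL connection
```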
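And to illustrate c), a sketch of a one-off maintenance script, assuming (purely as an example) that the frequent queries filter on a created_at column of the hypothetical articles table:

```php
<?php
// Hypothetical one-off script: add an index on the column the frequent
// queries filter on, so MySQL can seek instead of scanning the whole table.
$pdo = new PDO('mysql:host=localhost;dbname=mydb', 'user', 'pass');
$pdo->exec('ALTER TABLE articles ADD INDEX idx_created_at (created_at)');

// Verify MySQL actually uses it: the "key" column of the EXPLAIN output
// should show idx_created_at instead of NULL (NULL would mean a full scan).
foreach ($pdo->query("EXPLAIN SELECT id, title FROM articles WHERE created_at >= '2024-01-01'") as $row) {
    print_r($row);
}
```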
In conclusion, there are many ways to enhance the performance of your site, but increasing memory limits is certainly not a good one.
In fact, allowing more memory will, on the one hand, not solve the problem of limited MySQL connections and will, on the other hand, give bad code more "space" to consume resources, which essentially means the problem is aggravated.
I would strongly recommend reviewing the code and/or introducing caching mechanisms in the applications running on your site.
Hope the above helps and explains things a little bit.
Regards......