
[#1010995] Large number of Command.Prepare statements causes client disconnection


Date: 2011-02-22 08:41
Priority: 3
State: Open
Submitted by: Torello Querci (mk8)
Assigned to: Nobody (None)
Npgsql Version: None
Category:
Group:
Resolution: None
Summary: Large number of Command.Prepare statements causes client disconnection

Detailed description
If a very large number of Command.Prepare calls are issued, Npgsql gives this error:

Exception ERROR: 53200: out of memory

I got this message after 1,290,558 inserts on my PostgreSQL 8.4 server.
On PostgreSQL 8.3 all the memory is consumed and, in the end, the server hangs.

Here is the sample code to reproduce this error (as a test I created a "test" database with one table "test" with a single column "value"):

=========================
using System;
using System.Data;
using System.Data.Common;
using Npgsql;

namespace CmdPrepareBug
{
    class MainClass
    {
        public static void Main (string[] args)
        {
            NpgsqlConnection cnn = new NpgsqlConnection(
                "Server=localhost;Database=test;User ID=postgres;Password=;");
            cnn.Open();
            Console.WriteLine ("Database Opened..");

            NpgsqlTransaction tran = cnn.BeginTransaction();

            NpgsqlCommand cmd;
            for (int i = 0; i < 10000000; ++i) {
                try {
                    cmd = cnn.CreateCommand();
                    cmd.CommandText = "INSERT INTO test (value) values (" + i + ")";
                    cmd.Prepare();

                    cmd.Transaction = tran;

                    cmd.ExecuteNonQuery();
                    cmd.Dispose();
                    cmd = null;

                    if (i % 10000 == 0) {
                        System.Console.WriteLine("I = " + i);
                    }
                } catch (Exception ex) {
                    System.Console.WriteLine("Exception " + ex.Message);
                    System.Console.WriteLine("I=" + i);
                    break;
                }
            }

            tran.Commit();

            Console.WriteLine ("Hello World!");
        }
    }
}

Followup

Date: 2011-02-22 13:28
Sender: Francisco Figueiredo jr.


The problem here is that Npgsql only releases prepared statements when the connection is released. This was done to maximize prepared statement reuse, but it is not working very well.

I'm going to fix this so that the prepared statement is released when the command is disposed, or better yet, use a cache of prepared statements like the JDBC driver does.
Date: 2011-02-22 12:43
Sender: Torello Querci

Of course, this example is designed to trigger the error.
In the real world this usage is wrong.

Nevertheless, the library should cope with it, not fill the
server's memory.
Date: 2011-02-22 12:36
Sender: Alex A Ermoshenko

Sorry, but your queries are not the same!
cmd.CommandText = "INSERT INTO test (value) values ("+i+")";

Try using parameters instead, for example:

cmd.CommandText = "INSERT INTO test (value) values (:value)";
NpgsqlParameter uidp = cmd.Parameters.Add("value", NpgsqlTypes.NpgsqlDbType.Integer);
uidp.Value = i;
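A complete version of this suggestion would prepare the command once, outside the loop, and rebind only the parameter value on each iteration, so a single server-side plan is reused for every row. The following is a sketch, not taken from the tracker, assuming the reporter's local "test" database and connection string:

```csharp
using Npgsql;

class PrepareOnce
{
    public static void Main ()
    {
        using (NpgsqlConnection cnn = new NpgsqlConnection(
            "Server=localhost;Database=test;User ID=postgres;Password=;"))
        {
            cnn.Open();
            using (NpgsqlTransaction tran = cnn.BeginTransaction())
            using (NpgsqlCommand cmd = cnn.CreateCommand())
            {
                cmd.Transaction = tran;
                cmd.CommandText = "INSERT INTO test (value) values (:value)";
                NpgsqlParameter p = cmd.Parameters.Add("value", NpgsqlTypes.NpgsqlDbType.Integer);
                cmd.Prepare();          // one prepared statement for the whole loop

                for (int i = 0; i < 10000000; ++i)
                {
                    p.Value = i;        // only the bound value changes per row
                    cmd.ExecuteNonQuery();
                }
                tran.Commit();
            }
        }
    }
}
```

With a single prepared statement the server-side memory stays flat regardless of the number of rows inserted.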



Date: 2011-02-22 10:50
Sender: Torello Querci

No, Npgsql needs to handle this case.
At the moment I do not know whether the best choice is to cap
the number of prepared statements or to query the database
server for memory information.

For example, IIRC, the PostgreSQL JDBC driver keeps a pool of
prepared statements: if the query is the same it reuses the
existing statement, otherwise it inserts it into the pool.
When the pool is full, the least recently used statement is
dropped.

I suppose the same behavior can be implemented in Npgsql.

In the next days I will look into this in more depth.
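The JDBC-style pool described above can be sketched as a small least-recently-used map from SQL text to a server-side statement name. This is a minimal illustration, not Npgsql code; the class and member names are invented for the sketch, and the string values stand in for real prepared-statement handles:

```csharp
using System;
using System.Collections.Generic;

// Minimal LRU cache sketch: SQL text -> server-side statement name.
// A real driver would also DEALLOCATE the statement on eviction.
class LruStatementCache
{
    private readonly int capacity;
    private readonly Dictionary<string, LinkedListNode<KeyValuePair<string, string>>> map =
        new Dictionary<string, LinkedListNode<KeyValuePair<string, string>>>();
    private readonly LinkedList<KeyValuePair<string, string>> order =
        new LinkedList<KeyValuePair<string, string>>();

    public LruStatementCache(int capacity)
    {
        this.capacity = capacity;
    }

    // Returns the cached statement name for this SQL text, or null on a miss.
    public string Get(string sql)
    {
        LinkedListNode<KeyValuePair<string, string>> node;
        if (!map.TryGetValue(sql, out node))
            return null;
        order.Remove(node);      // move to the front: most recently used
        order.AddFirst(node);
        return node.Value.Value;
    }

    // Caches a statement, evicting the least recently used entry when full.
    public void Put(string sql, string statementName)
    {
        if (map.ContainsKey(sql))
        {
            Get(sql);            // already cached: just refresh its position
            return;
        }
        if (map.Count >= capacity)
        {
            LinkedListNode<KeyValuePair<string, string>> lru = order.Last;
            order.RemoveLast();  // drop the least recently used statement
            map.Remove(lru.Value.Key);
        }
        map[sql] = order.AddFirst(new KeyValuePair<string, string>(sql, statementName));
    }
}

class Demo
{
    static void Main()
    {
        LruStatementCache cache = new LruStatementCache(2);
        cache.Put("SELECT 1", "stmt1");
        cache.Put("SELECT 2", "stmt2");
        cache.Get("SELECT 1");           // touch stmt1 so stmt2 becomes LRU
        cache.Put("SELECT 3", "stmt3");  // evicts stmt2
        Console.WriteLine(cache.Get("SELECT 2") == null);  // True: evicted
        Console.WriteLine(cache.Get("SELECT 1"));          // stmt1
    }
}
```

Bounding the pool this way caps server-side memory no matter how many distinct statements the application prepares.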
Date: 2011-02-22 10:43
Sender: Alex A Ermoshenko

I guess the Npgsql developers cannot fix this bug... because that is how PostgreSQL works...

Try playing with the memory and transaction settings in postgresql.conf.

