Ban Management

Connection Errors on localhost

ryans1230 opened this issue · 26 comments

commented

Last night BM was like a toddler, kicking and fussing about everything. We have several minutes' worth of the same errors below.

I know it's normally due to slow queries or connections, but our database is on the same dedi as the server, so the connection shouldn't be an issue. If it is slow queries, could you please offer some advice on how to speed them up?

http://pastebin.com/R8xQDiXi

commented

Memory is at about 8%; CPU is where Minecraft always puts it, 100%.
Other plugins are using MySQL just fine. BM is getting almost impossible to use and keeps throwing the same errors.

commented

What does your MySQL log say?

commented

Check your CPU/memory usage. Check your MySQL log for any information.

commented

It hasn't happened since then; it might have just been some hiccups.

Although we are getting this error frequently:
```
2016-06-08 18:20:04 [WARN] org.apache.commons.lang.UnhandledException: Plugin BanManager v5.9.0 generated an exception while executing task 601
	at org.bukkit.craftbukkit.v1_9_R2.scheduler.CraftAsyncTask.run(CraftAsyncTask.java:56)
	at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
	at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
	at java.lang.Thread.run(Thread.java:745)
Caused by: java.lang.IllegalStateException: Could not get next result for class me.confuser.banmanager.data.PlayerBanData
	at me.confuser.banmanager.internal.ormlite.stmt.SelectIterator.next(SelectIterator.java:183)
	at me.confuser.banmanager.storage.PlayerBanStorage.getDuplicates(PlayerBanStorage.java:201)
	at me.confuser.banmanager.listeners.JoinListener$2.run(JoinListener.java:305)
	at org.bukkit.craftbukkit.v1_9_R2.scheduler.CraftTask.run(CraftTask.java:71)
	at org.bukkit.craftbukkit.v1_9_R2.scheduler.CraftAsyncTask.run(CraftAsyncTask.java:53)
	... 3 more
Caused by: java.sql.SQLException: Operation not allowed after ResultSet closed
	at com.mysql.jdbc.SQLError.createSQLException(SQLError.java:959)
	at com.mysql.jdbc.SQLError.createSQLException(SQLError.java:898)
	at com.mysql.jdbc.SQLError.createSQLException(SQLError.java:887)
	at com.mysql.jdbc.SQLError.createSQLException(SQLError.java:862)
	at com.mysql.jdbc.ResultSetImpl.checkClosed(ResultSetImpl.java:743)
	at com.mysql.jdbc.ResultSetImpl.findColumn(ResultSetImpl.java:1037)
	at me.confuser.banmanager.internal.hikari.pool.HikariProxyResultSet.findColumn(HikariProxyResultSet.java)
	at me.confuser.banmanager.internal.ormlite.jdbc.JdbcDatabaseResults.findColumn(JdbcDatabaseResults.java:90)
	at me.confuser.banmanager.internal.ormlite.field.FieldType.resultToJava(FieldType.java:815)
	at me.confuser.banmanager.internal.ormlite.stmt.mapped.BaseMappedQuery.mapRow(BaseMappedQuery.java:60)
	at me.confuser.banmanager.internal.ormlite.stmt.SelectIterator.getCurrent(SelectIterator.java:270)
	at me.confuser.banmanager.internal.ormlite.stmt.SelectIterator.nextThrow(SelectIterator.java:161)
	at me.confuser.banmanager.internal.ormlite.stmt.SelectIterator.next(SelectIterator.java:173)
	... 7 more
```
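For what it's worth, the root cause here ("Operation not allowed after ResultSet closed") is a resource-lifetime bug: an iterator backed by a JDBC ResultSet is still being read after the underlying statement or connection has been closed, for example reclaimed by the pool mid-iteration. A minimal self-contained sketch of that failure mode (not BanManager's actual code; all class names here are made up):

```java
import java.util.Iterator;
import java.util.NoSuchElementException;

// Hypothetical stand-in for an ORMLite SelectIterator backed by a ResultSet.
// Once the underlying resource is closed, any further access throws, just
// like the SQLException in the trace above.
class ClosableResults implements Iterator<String> {
    private boolean closed = false;
    private int remaining = 3;

    // e.g. the pool reclaims the connection while iteration is in progress
    public void close() { closed = true; }

    @Override
    public boolean hasNext() {
        if (closed) {
            // Mirrors "Operation not allowed after ResultSet closed"
            throw new IllegalStateException("Operation not allowed after ResultSet closed");
        }
        return remaining > 0;
    }

    @Override
    public String next() {
        if (!hasNext()) throw new NoSuchElementException();
        remaining--;
        return "row";
    }
}

public class ResultSetClosedSketch {
    public static void main(String[] args) {
        ClosableResults it = new ClosableResults();
        it.next();        // first row reads fine
        it.close();       // something else closes the resource mid-iteration
        try {
            it.hasNext(); // the next access now fails, as in the stack trace
        } catch (IllegalStateException e) {
            System.out.println("caught: " + e.getMessage());
        }
    }
}
```

The fix for this class of bug is usually to either finish consuming the iterator before the connection can be returned to the pool, or copy the rows into a list while the connection is still held.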

commented

Please check your MySQL logs.

commented

Checked the MySQL logs; no errors in regards to the bans database.

commented

10 connections, and the usual 3000ms timeout.

commented

Can you run the following queries against your database and paste the output here?

```sql
SHOW PROCESSLIST;
SHOW STATUS LIKE '%conn%';
```

It should show a list of all connections.
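If connection exhaustion is suspected, it can also help to compare the connection counters against the server-side limit. These are standard MySQL statements:

```sql
-- Server-side connection limit
SHOW VARIABLES LIKE 'max_connections';
-- Peak concurrent connections since the server started
SHOW STATUS LIKE 'Max_used_connections';
-- Connections that failed to complete the handshake
SHOW STATUS LIKE 'Aborted_connects';
```

If `Max_used_connections` is near `max_connections`, something is holding connections open.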

commented

http://pastebin.com/QVj3AyB3

Would you like me to do the same the next time it goes into that loop?

commented

Yes please

commented

Alright, I'm noticing a trend: it happens when our server is full and players are getting kicked with the "Server is full" message (default CB event).

http://pastebin.com/q3y7t1AN

ETA: And it appears to come and go in waves; there's no consistent pattern to when it happens.

commented

When the server is full, does the machine you're running this on start to become overloaded (i.e. CPU and memory spikes)?

commented

Yes, GC probably runs every 10-15 seconds, and CPU usage usually exceeds 100%.

We have two other plugins that use SQL, and neither of them has issues (one of them being LogBlock).

commented

Server log: http://pastebin.com/iWAfxCvg

Could it be that we have over 44k rows in bm_player_mute_records? (602 loaded into memory)

We are now running on Bungee, and none of the other 3 servers using the same database are having this issue, even though some of the muted players are on the other servers

commented

What's your maxConnections set to in your config? 44k rows shouldn't cause any issues as far as I'm aware.

commented

This issue also causes an error while running some commands (mainly /warn).

https://imgur.com/peDMKWD

The console either doesn't have the error, or it's buried somewhere in the connection leak floods.

commented

That last one looks like a chunk issue, something to do with physics events. Are you using localhost or 127.0.0.1? The former uses a unix socket, the latter uses TCP.

commented

127

commented

Can you enable debug within the BM config file? Please note it'll be very noisy, with a lot of messages. You only need it on until the next time this occurs. Hopefully it'll help narrow down what's happening.
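For reference, a hypothetical sketch of what that might look like in the plugin's config file; the actual key name in BanManager's config may differ, so check its config.yml:

```yaml
# Hypothetical key name: look for the debug option in BanManager's config.yml.
debug: true
```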

commented

What's your max number of connections set to within MySQL itself? Are you using localhost or 127.0.0.1 as the ip?

commented

```
max_connections = 10000
connect_timeout = 5
wait_timeout = 600
max_allowed_packet = 16M
thread_cache_size = 128
sort_buffer_size = 4M
bulk_insert_buffer_size = 16M
tmp_table_size = 32M
max_heap_table_size = 32M
```
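One thing worth noting about these settings: a common cause of "connection closed" style errors with a pooled setup is `wait_timeout` (600s here) reaping idle connections that the pool still considers live. If the pool's connection lifetime is configurable, keeping it below `wait_timeout` avoids that. A hedged sketch using HikariCP's standard property names (BanManager's own config keys may differ):

```properties
# Sketch only: these are HikariCP's standard properties, not BanManager config keys.
jdbcUrl=jdbc:mysql://127.0.0.1:3306/bans
maximumPoolSize=10
# Keep pooled connections younger than MySQL's wait_timeout (600 s above),
# so the server never kills a connection the pool still thinks is alive.
maxLifetime=540000
```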

Not sure if this crash was caused by this error or another error in BM
http://pastebin.com/fZPv09T1

commented

Phew, that's a lot of spam.

There's nothing that set it off; it's just the regular player data queries it does all day, every day.

commented

So did the issue occur with debug enabled? If so, could you email the dump?

commented

As per #640 (comment)

Could it then be an issue with the thread not being able to keep up? As I mentioned before, we have another server using the same database, and while it never fills up to the size of our first server, it doesn't suffer from the same issues running the latest version.

As far as I'm aware, most of the debug information at any given time is similar to the following:

```
2016-06-28 02:10:06 [INFO] 2016-06-28 02:10:06,297 [DEBUG] StatementExecutor query of 'SELECT COUNT(*) FROM `bm_players` WHERE `id` = ?' returned 1
2016-06-28 02:10:06 [INFO] 2016-06-28 02:10:06,298 [DEBUG] BaseMappedStatement update data with statement 'UPDATE `bm_players` SET `name` = ?, `ip` = ?, `lastSeen` = ? WHERE `id` = ?' and 4 args, changed 1 rows
2016-06-28 02:10:06 [INFO] 2016-06-28 02:10:06,298 [DEBUG] StatementBuilder built statement SELECT * FROM `bm_players` WHERE `name` = 'AkariSenpai'
2016-06-28 02:10:06 [INFO] 2016-06-28 02:10:06,298 [DEBUG] BaseMappedStatement prepared statement 'SELECT * FROM `bm_players` WHERE `name` = 'AkariSenpai'' with 0 args
2016-06-28 02:10:06 [INFO] 2016-06-28 02:10:06,298 [DEBUG] SelectIterator starting iterator @1252074220 for 'SELECT * FROM `bm_players` WHERE `name` = 'AkariSenpai''
2016-06-28 02:10:06 [INFO] 2016-06-28 02:10:06,298 [DEBUG] SelectIterator closed iterator @1252074220 after 1 rows
2016-06-28 02:10:06 [INFO] 2016-06-28 02:10:06,298 [DEBUG] StatementExecutor query of 'SELECT * FROM `bm_players` WHERE `name` = 'AkariSenpai'' returned 1 results
```

commented

I also don't think 1f11d99 fixed #321, as the same problem still occurred:

```
[WARN] org.apache.commons.lang.UnhandledException: Plugin BanManager v5.10.1 generated an exception while executing task 478
	at org.bukkit.craftbukkit.v1_10_R1.scheduler.CraftAsyncTask.run(CraftAsyncTask.java:56)
	at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
	at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
	at java.lang.Thread.run(Thread.java:745)
Caused by: java.lang.IllegalStateException: Errors getting more results of class me.confuser.banmanager.data.PlayerNoteData
	at me.confuser.banmanager.internal.ormlite.stmt.SelectIterator.hasNext(SelectIterator.java:103)
	at me.confuser.banmanager.listeners.JoinListener$1.run(JoinListener.java:208)
	at org.bukkit.craftbukkit.v1_10_R1.scheduler.CraftTask.run(CraftTask.java:71)
	at org.bukkit.craftbukkit.v1_10_R1.scheduler.CraftAsyncTask.run(CraftAsyncTask.java:53)
	... 3 more
Caused by: java.sql.SQLException: Operation not allowed after ResultSet closed
	at com.mysql.jdbc.SQLError.createSQLException(SQLError.java:959)
	at com.mysql.jdbc.SQLError.createSQLException(SQLError.java:898)
	at com.mysql.jdbc.SQLError.createSQLException(SQLError.java:887)
	at com.mysql.jdbc.SQLError.createSQLException(SQLError.java:862)
	at com.mysql.jdbc.ResultSetImpl.checkClosed(ResultSetImpl.java:743)
	at com.mysql.jdbc.ResultSetImpl.next(ResultSetImpl.java:6320)
	at me.confuser.banmanager.internal.hikari.pool.HikariProxyResultSet.next(HikariProxyResultSet.java)
	at me.confuser.banmanager.internal.ormlite.jdbc.JdbcDatabaseResults.next(JdbcDatabaseResults.java:64)
	at me.confuser.banmanager.internal.ormlite.stmt.SelectIterator.hasNextThrow(SelectIterator.java:81)
	at me.confuser.banmanager.internal.ormlite.stmt.SelectIterator.hasNext(SelectIterator.java:98)
	... 6 more
```

commented

The thread not keeping up shouldn't be an issue; they'll never overlap. Yes, I'm aware #321 isn't yet fixed, which is why #493 exists. The issue is to do with connections, which is why I requested a log dump: I cannot reproduce this myself. I only ever saw this issue twice on Frostcast's network in the space of 6 months, so I've not been able to track it down. A log dump containing the details of when the issue starts occurring would be a great help.