Site hangs #9
Comments
Hey, the key prefix that is in the output of monitor is bc. Did you change the key prefix setting in your config? One thing to note is that ASP.NET will block concurrent requests to the same session. Can you try using JMeter with multiple different sessions? I can help more if you can post a simple ASP.NET sample somewhere that you can hit with JMeter and I'll try to reproduce it. Thanks! |
Hi, yes, I did change the prefix in my web.config:

<sessionState mode="Custom" cookieless="false" timeout="30" customProvider="bc">
  <providers>
    <clear />
    <add name="bc" type="..." />
  </providers>
</sessionState>

Anyway, yes, we're using different sessions in JMeter (we have an HTTP Cookie Manager set up), and I can see the different sessions listed in Redis as well. What concerns me is that even when testing locally and manually, with only me accessing the site, I occasionally still ran into the same problem. I'll see if I can get something set up. Thanks! |
Has there been any progress on this issue? We are running into threads blocking because a response is not coming back from the session state provider. Due to a bug, this was locking the whole site. The bug has been fixed, but I still need to look into the hung requests :| I'm curious whether the issue is with the newest Redis client; it's quite a few versions ahead of two years ago. Is there a particular, tested version of ServiceStack.Redis to try? |
I was pulled off to work on another area so I haven't been able to resolve this. As you said, I should probably try again with ServiceStack 3.8.3. |
Hmm, the oldest NuGet package available through the package explorer is 3.9.29. You wouldn't happen to know how to get hold of 3.8.3 without building from source? |
All the ServiceStack DLLs are in the lib folder of this repo. Good luck, and please let me know how it goes :) |
If either of you can create a reproducible case I'd like to help fix this issue. We don't use the ASP.NET session anymore, but when we were using it we had a significant load without any lockups (or none that we noticed). Right now, we still use SS.Redis v3.9.29 in other areas of our app without any issues. |
Still getting the hangs. Once the threads lock up, all the traffic turns into this:

1392072383.569702 "HGETALL" "InnerProvider/vaq43jyhiru0sef0i2z3kost"

I may need to build the project in debug mode to get a good stack trace. |
@ProTip Can you reproduce locally with JMeter or other load testing software? Please provide a stack trace if you can! |
The analysis is a lot different with the debug build. I have a lot of threads waiting for information from the server. This particular thread is also holding a lock which is blocking other threads (this has supposedly been fixed):

[stack trace not captured in this transcript]

This one has been active much longer:

[stack trace not captured in this transcript]
|
Are you saying the first issue was fixed in SS.Redis? |
Well, my application is hanging and I'm seeing similar output to the OP. Every time the application hangs, most of my threads are waiting on data from Redis. I was getting this issue with the latest compatible SS.Redis; now I'm using a debug build of the project that uses the bundled SS.Redis version. |
@TheCloudlessSky, what version of Redis are you using in your setup? Thanks. |
Hi guys. I believe I have resolved my issue of the threads waiting for data from Redis. Apparently this application/client combination is creating about 5+ connections to the Redis server per thread. I was simulating 100 users, so this was creating 500+ (minimum) connections, running right through the open file limit. I have raised the limit and no longer see the threads waiting for data from Redis. |
@wliao008 We're using 3.9.57. See the details below for our setup.

@ProTip That's quite a lot of connections per-thread 💥! What type of Redis client manager are you using (pooled, basic)? Is it configured through the web.config? Here's our setup:

var pooledClientManager = new PooledRedisClientManager(redisConnection);

// IMPORTANT: Ninject's default Pipeline/ActivationCache only deactivates (disposes)
// references once. And because the pool returns the same instances, Ninject does not
// dispose of the IRedisClient the 2nd time it's requested. This causes the client to never
// become deactivated. Therefore, we wrap the client with the DisposablePooledClient,
// which always uses a new reference for Ninject to manage and internally disposes of the
// IRedisClient.
// http://stackoverflow.com/questions/17047732/how-can-ninject-be-configured-to-always-deactivate-pooled-references
Bind<PooledRedisClientManager.DisposablePooledClient<RedisClient>>()
    .ToMethod(ctx => pooledClientManager.GetDisposableClient<RedisClient>())
    .InRequestScope();

Bind<IRedisClient, ICacheClient>()
    .ToMethod(ctx => ctx.Kernel.Get<PooledRedisClientManager.DisposablePooledClient<RedisClient>>().Client)
    .InTransientScope();

This setup ensures that any part of our application (other than the session state provider) that resolves an IRedisClient gets a pooled client that is properly disposed. We then configure the session state provider's client manager:

RedisSessionStateStore.SetClientManager(pooledClientManager);

So, you shouldn't be creating a lot of connections to the Redis server per-thread if you're using the PooledRedisClientManager. If you haven't done so already, switch to the pooled client manager. If you're using Redis somewhere else in your application, make sure that you're properly cleaning up the client so that it'll be put back into the pool. |
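A minimal usage sketch (editor's illustration, not code from the thread) of what "properly cleaning up the client" looks like with a PooledRedisClientManager; the class and key names are made up for the example. The point is that disposing the client returns it to the pool rather than opening and closing a raw connection per call:

using ServiceStack.Redis;

public class PooledClientUsageExample // hypothetical class for illustration
{
    public void TouchCounter(PooledRedisClientManager clientManager)
    {
        // GetClient() hands out a pooled IRedisClient; Dispose() returns it to the
        // pool instead of closing the underlying connection.
        using (IRedisClient redis = clientManager.GetClient())
        {
            redis.IncrementValue("example:request-counter"); // hypothetical key
        }
    }
}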
Thanks @TheCloudlessSky, that's good info! Regarding @ProTip's suggestion of increasing the file limit: on my default Redis instance we have 3984 max clients, but we still experience the lock-up issue when load testing with 2 users. I did go back to the initial version of the session provider without the Watch/Unwatch changes, and it seems fine so far; I load tested 100 users without a single problem. |
@wliao008 Hmm, that's weird that the watch/unwatch would be causing it. There's got to be a deeper bug within ServiceStack.Redis then. I don't use key watching in any of our other Redis-based code. For any atomic operation, we use a generational approach (like with NHibernate.Caches.Redis), Redis transactions with MULTI/EXEC (only useful for commands that don't return data and need that data mutated), or a Lua script with more logic when necessary (since they are atomic!). I'd like to use the Lua approach, but it'd depend on whether people are using Redis 2.6.0. However, as I said, doing watch/unwatch shouldn't be causing any issues. If it is, it'd mean there's a bug in ServiceStack.Redis. |
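For readers unfamiliar with the MULTI/EXEC approach mentioned above, here is a minimal sketch (editor's illustration, not code from the provider) using ServiceStack.Redis's CreateTransaction API; the key names and commands are made up for the example:

using System;
using ServiceStack.Redis;

public class TransactionExample // hypothetical class for illustration
{
    public void BumpGeneration(IRedisClient redis)
    {
        // Commands queued here are sent inside MULTI/EXEC, so they apply atomically.
        using (IRedisTransaction trans = redis.CreateTransaction())
        {
            trans.QueueCommand(r => r.IncrementValue("example:generation"));
            trans.QueueCommand(r => r.SetEntry("example:last-updated", DateTime.UtcNow.ToString("o")));
            trans.Commit();
        }
    }
}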
Unfortunately, I don't have direct control over the instantiation at the moment. The software I'm deploying wraps the provider and passes the config through. I am specifying "pooled" for the clientType, though. But seeing 500 open files for the Redis process under /proc makes me suspicious that it's not being used correctly in that regard with a 100-concurrent-user simulation. I believe it may be creating 2 session keys per ASP.NET session, though, so that could account for up to 200? Not sure how they get cleaned up after that. I may try the pre-watch version as well, just to confirm whether or not that improves my situation. |
I was looking through the MSDN documentation on session state providers yesterday, trying to find out the steps of execution, and came across this for the GetItemExclusive method (paraphrasing: if no session item data is found at the data store, the provider should set the locked output parameter to false and return null).

In the provider, there's:

private SessionStateStoreData GetItem(...)
{
    // code removed for brevity
    RedisSessionState state;
    if (!RedisSessionState.TryParse(stateRaw, out state))
    {
        client.UnWatch();
        return null;
    }
    // code removed for brevity
}

So we changed it to:

private SessionStateStoreData GetItem(...)
{
    // code removed for brevity
    RedisSessionState state;
    if (!RedisSessionState.TryParse(stateRaw, out state))
    {
        client.UnWatch();
        locked = false; // <- add this line
        return null;
    }
    // code removed for brevity
}

And believe it or not, this seems to fix the problem for us. I've been load testing it on and off with a variable number of users and so far have not had any locking issues. |
I just released v1.2.0 which hopefully fixes this problem. I noticed that there were places where UNWATCH was not being called (because the transaction was not committed -- or even started). Update and give it a spin, and please let me know if it fixes this issue. |
v1.3.0 released that uses a distributed lock rather than watch/unwatch. Please update and let me know if this fixes your issues. Also update ServiceStack.Common, ServiceStack.Redis, and ServiceStack.Text to 3.9.71 (The dependencies are now >= 3.9 && < 4.0). |
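For context on the change, here is a minimal sketch (editor's illustration, assuming ServiceStack.Redis's AcquireLock API; the lock key, hash layout, and timeout are made up) of how a Redis distributed lock is typically used:

using System;
using ServiceStack.Redis;

public class DistributedLockExample // hypothetical class for illustration
{
    public void UpdateSessionExclusively(IRedisClient redis, string sessionId)
    {
        // AcquireLock blocks until the named lock is acquired (or the timeout expires)
        // and releases it when the returned handle is disposed.
        using (redis.AcquireLock("lock:session:" + sessionId, TimeSpan.FromSeconds(30)))
        {
            // Read-modify-write the session hash while no other worker holds the lock.
            var state = redis.GetAllEntriesFromHash("session:" + sessionId);
            state["touched"] = DateTime.UtcNow.ToString("o");
            redis.SetRangeInHash("session:" + sessionId, state);
        }
    }
}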
@TheCloudlessSky you were right about that locked = false! I was so hung up I didn't even notice it was there already. I'll try out the new version tomorrow, thanks! |
Need to test further (much further), but this appears to be much more stable for me. |
Awesome! If you can, try testing with JMeter, Apache Bench or another similar load tool. |
@TheCloudlessSky Hi, I'm testing with gatling and a simulation of 100 concurrent users. I'm very happy with the results so far. Max connections are around 60 now and I have not experienced hangs :) |
@TheCloudlessSky, sorry for the late update. I did try to test out the newer version, but we ended up using the older version in production. I've been assigned to work on other things since, so I haven't been able to follow up. As far as I'm concerned, this issue is OK for me. Thank you! |
Great, thanks!
|
Hi, I'm having a strange problem after putting the RedisSessionStateStoreProvider in place. Occasionally my site hangs, and if I go on the server and issue a "monitor" command to my Redis server, here's a portion of the output:
"watch" "bc/4i0sekkgl3nhis33ddooqyds"
"hgetall" "bc/4i0sekkgl3nhis33ddooqyds"
"unwatch"
"watch" "bc/4i0sekkgl3nhis33ddooqyds"
"hgetall" "bc/4i0sekkgl3nhis33ddooqyds"
"unwatch"
"watch" "bc/4i0sekkgl3nhis33ddooqyds"
"hgetall" "bc/4i0sekkgl3nhis33ddooqyds"
"unwatch"
"watch" "bc/4i0sekkgl3nhis33ddooqyds"
"hgetall" "bc/4i0sekkgl3nhis33ddooqyds"
"unwatch"
"watch" "bc/4i0sekkgl3nhis33ddooqyds"
"hgetall" "bc/4i0sekkgl3nhis33ddooqyds"
"unwatch"
...
...
and it just keeps going on and on...
Doing hget bc/4i0sekkgl3nhis33ddooqyds locked shows that this key is locked, which I assume is what is causing it to hang?
I run into the same problem when doing load testing with JMeter.
I used the code as-is without much modification. I have a master-slave setup (Redis v2.8.3), ServiceStack.Redis v3.9.60.
Has anyone seen this?
Thanks!
-wl