Problem Getting "Big" Values #27
From [email protected] on 2014-02-18 18:00:46 On Tue Feb 18 11:59:35 2014, [email protected] wrote:
This test was written before memcached acquired the -I option to raise the 1MB limit. The test assumes that values slightly below 1MB will be accepted by the server, but values slightly above will be rejected. My guess is that you ran the tests against memcached -I 128m, so the value above 1MB wasn't rejected by the server, hence the test failure.
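A minimal sketch of the boundary check described above (hypothetical code, not the actual t/big_value.t; the key names, sizes, and max_size setting are illustrative and assume a local memcached running with the default 1MB item limit):

```perl
use strict;
use warnings;
use Cache::Memcached::Fast;

# Hypothetical sketch: store one value just under and one just over 1MB
# and see which one the server accepts.  max_size is raised so the client
# itself doesn't refuse the oversized value before it reaches the server.
my $memd = Cache::Memcached::Fast->new({
    servers  => [ 'localhost:11211' ],
    max_size => 2 * 1024 * 1024,
});

my $under = 'x' x (1024 * 1024 - 1024);
my $over  = 'x' x (1024 * 1024 + 1024);

print $memd->set(under => $under) ? "under: stored\n" : "under: failed\n";
print $memd->set(over  => $over)  ? "over: stored\n"  : "over: rejected\n";
```

With a stock server the second set should fail; with memcached -I 128m it succeeds, which would break any test that expects the rejection.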
Can't reproduce. I ran memcached -I 128m -vv (version 1.4.17), adjusted your code to use a 100000 * 1024 value, and everything worked on Linux (I don't have access to any Windows host so can't test there):

$ perl /tmp/a.pl
Getting keys with Fast..
Getting keys with Client..

Output from memcached: ...
Could you please provide the output of memcached -vv when C::M::F fails?
From [email protected] on 2014-02-19 09:50:58 On 18 February 2014 18:00, Tomash Brechko via RT wrote:
As per the big_value.t file it seems that line 40 isn't about setting ...
Sure, please find below the output: ...
Nothing seems out of the ordinary, except that the C::M::F get for ...

Thanks,
From [email protected] on 2014-02-19 12:05:41 On Wed Feb 19 04:50:58 2014, [email protected] wrote:
Yup, you are right, I didn't check with the code. My final wild guess is that your server is too busy and memcached doesn't respond quickly enough. Please set the io_timeout parameter to zero to disable the I/O timeout (the default is 1 second), i.e.:

    my $memd_fast = Cache::Memcached::Fast->new({
        servers    => [ 'localhost:11211' ],
        io_timeout => 0,
        ...
    });

If this doesn't solve the problem then it would be interesting to know the minimum item size that reveals the problem, and also whether this size is always the same or varies slightly. However, beyond that I don't have any further ideas.
From [email protected] on 2014-02-19 12:11:29 On Wed Feb 19 07:05:41 2014, KROKI wrote:
Actually, by default close_on_error is enabled, so if it were a timeout issue the connection would be closed. However, in your memcached output I don't see a close event for 604 at all, which is puzzling.
From [email protected] on 2014-02-19 12:31:05 On 19 February 2014 12:11, Tomash Brechko via RT wrote:
The server has beefy specs and is otherwise idling, and the same behaviour ... I've re-run the same script but with only the data_short set and gets, and ...
Running the Memcached server in ... Which probably explains why the data isn't returned and the C::M::F connection ...

Thanks,
From [email protected] on 2014-02-19 13:35:18 On Wed Feb 19 07:31:05 2014, [email protected] wrote:
I wonder how much later the close of 604 happens; simply being after 612 is OK, but any delay makes me think of io_timeout.
Those lines mean the memcached server is reading (or trying to read) from the client. If they are output during the STORE then this is expected; if after the GET then this is puzzling. Please do one more try before giving up: set io_timeout to zero, do the gets from Memcached::Client before the gets from C::M::F, and with C::M::F do get(data_long) before get(data_short), and finally paste the output of memcached -vv here. Upgrading memcached to the latest version may also help (yes, I know that only C::M::F triggers the problem, but still ;)).
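The requested try can be sketched roughly as follows (hypothetical code: the server address and key names come from the thread, but the constructor options, Memcached::Client usage, and output formatting are illustrative, and this needs a running memcached to do anything):

```perl
use strict;
use warnings;
use Cache::Memcached::Fast;
use Memcached::Client;    # the other client used for comparison above

my $memd_fast = Cache::Memcached::Fast->new({
    servers    => [ 'localhost:11211' ],
    io_timeout => 0,                    # disable the 1-second I/O timeout
    max_size   => 128 * 1024 * 1024,    # match the raised server limit
});
my $memd = Memcached::Client->new({ servers => [ 'localhost:11211' ] });

# 1. Gets from Memcached::Client first ...
my $a = $memd->get('data_long');
my $b = $memd->get('data_short');

# 2. ... then from C::M::F, the long value before the short one.
my $c = $memd_fast->get('data_long');
my $d = $memd_fast->get('data_short');

printf "M::C    long=%s short=%s\n",
       map { defined() ? length() : 'undef' } $a, $b;
printf "C::M::F long=%s short=%s\n",
       map { defined() ? length() : 'undef' } $c, $d;
```

Running this against memcached -vv and comparing which gets return a length and which return undef is the comparison the paragraph above asks for.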
From [email protected] on 2014-02-19 14:14:02 On 19 February 2014 13:35, Tomash Brechko via RT wrote:
They are definitely happening at the GET stage, not at the STORE.
I've done as requested (first two gets are C::M::F), behaviour is the same
There don't seem to be official Windows builds of Memcached any more ... I understand if you can't devote more time to finding this Windows-specific ...

Thanks,
From [email protected] on 2014-02-19 14:19:43 On 19 February 2014 14:13, Damien Chaumette [email protected] wrote:
Sorry, I meant the first two gets are with M::C and the second batch with C::M::F.

Thanks,
From [email protected] on 2014-02-19 15:32:28 Apologies for the spam, I've tried with a Couchbase Memcached bucket (they ...

Thanks,

On 19 February 2014 14:19, Damien Chaumette [email protected] wrote:
From [email protected] on 2014-02-19 16:17:14 On Wed Feb 19 09:14:02 2014, [email protected] wrote:
The C::M::F client sends a get request, reads the reply, and sends nothing else after that. If I got you correctly then the memcached server is trying to read something from the client afterwards, and I don't understand why.
What I see from the trace:

1. C::M::F stored two keys in memcached.
2. The other client got the two keys.
3. C::M::F requested data_long and got some error at this point, closed the connection, and requested data_short via another connection:
Given that you disabled io_timeout I can only guess what error could happen, but C::M::F definitely got one, otherwise it wouldn't open another connection. So the question is why C::M::F thinks it got an error, and whether its perception is valid.
There's a tiny possibility that C::M::F specifically triggers some race in the memcached server that other clients don't, and race fixes happen in almost every memcached release (without a detailed description of what they could affect). But I got your last reply: the problem is reproducible with the latest memcached.
First of all, an easier way would be to use another module like Cache::Memcached::libmemcached (provided that it builds on Windows and works); it should be comparably fast. Debugging Perl modules written in C is a pain even on Linux, and simply looking into the code won't reveal much, I think, because the problem somehow relates to your setup (I built memcached 1.4.5 here but couldn't reproduce). Value reading happens in src/client.c:read_value(). But if you really want to devote time to it, the first thing I would try is to capture network traffic to see the actual packet contents (data_long is big, but protocol commands will always be at packet beginnings), and to trace system calls (on Linux we have the strace utility that shows system calls and their results; I don't know what Windows has).
From [email protected] on 2014-02-20 09:26:47
I've just had a look at Cache::Memcached::libmemcached on CPAN, although ...

Thanks,
From [email protected] on 2022-06-18 03:32:19 Hello, I just noticed that src/client.c:read_value() returns 4294967295 where it should return -1. My setup: ...

Thank you, twata
From [email protected] on 2022-06-18 03:44:29 Sorry. It was src/client.c:readv_restart(), not src/client.c:read_value() . |
Migrated from rt.cpan.org #93140 (status was 'open')
Requestors:
From [email protected] on 2014-02-18 16:59:35
Hi there,
I have been playing around with a few Memcached client libraries recently
and, ideally, would like to stick with Perl, but with decent performance.
Memcached::Client is simple enough to use, but its performance isn't close to what
Python or C# libraries offer.
Thinking it was probably due to its pure Perl nature, I then came across
your library.
During the CPAN installation I ran into the following unit test issue: "Failed
test 'Fetch ...' # at t/big_value.t line 40."
With the Memcached server I am running, it's possible to up the default
value-size limit from the default 1MB to 128MB. Doing so and setting the
"max_size" parameter in your library I am able to set very large values
without any problem.
The problem comes when I try to get them out, it just doesn't return
anything.
The key/value most definitely has been set, as I can retrieve it with a get
call from Memcached::Client.
I've placed a code sample here, rather simple but it might help:
http://pastebin.com/QtBxqMhB
My setup:
I've had no problem using my Memcached server build with: ...
but it's worth noting that Cache::Memcached throws up quite a few
unit test failures and therefore hasn't been tested.
Let me know if this makes any sense and/or if you need more information.
Thanks,
Damien