add memcache

This commit is contained in:
Federico Justus Denkena 2023-11-06 16:54:45 +01:00
parent 841f0e9dbd
commit 22037429fd
Signed by: f-denkena
GPG Key ID: 28F91C66EE36F382
60 changed files with 7660 additions and 0 deletions


@@ -0,0 +1 @@
pip


@@ -0,0 +1,202 @@
Apache License
Version 2.0, January 2004
http://www.apache.org/licenses/
TERMS AND CONDITIONS FOR USE, REPRODUCTION, AND DISTRIBUTION
1. Definitions.
"License" shall mean the terms and conditions for use, reproduction,
and distribution as defined by Sections 1 through 9 of this document.
"Licensor" shall mean the copyright owner or entity authorized by
the copyright owner that is granting the License.
"Legal Entity" shall mean the union of the acting entity and all
other entities that control, are controlled by, or are under common
control with that entity. For the purposes of this definition,
"control" means (i) the power, direct or indirect, to cause the
direction or management of such entity, whether by contract or
otherwise, or (ii) ownership of fifty percent (50%) or more of the
outstanding shares, or (iii) beneficial ownership of such entity.
"You" (or "Your") shall mean an individual or Legal Entity
exercising permissions granted by this License.
"Source" form shall mean the preferred form for making modifications,
including but not limited to software source code, documentation
source, and configuration files.
"Object" form shall mean any form resulting from mechanical
transformation or translation of a Source form, including but
not limited to compiled object code, generated documentation,
and conversions to other media types.
"Work" shall mean the work of authorship, whether in Source or
Object form, made available under the License, as indicated by a
copyright notice that is included in or attached to the work
(an example is provided in the Appendix below).
"Derivative Works" shall mean any work, whether in Source or Object
form, that is based on (or derived from) the Work and for which the
editorial revisions, annotations, elaborations, or other modifications
represent, as a whole, an original work of authorship. For the purposes
of this License, Derivative Works shall not include works that remain
separable from, or merely link (or bind by name) to the interfaces of,
the Work and Derivative Works thereof.
"Contribution" shall mean any work of authorship, including
the original version of the Work and any modifications or additions
to that Work or Derivative Works thereof, that is intentionally
submitted to Licensor for inclusion in the Work by the copyright owner
or by an individual or Legal Entity authorized to submit on behalf of
the copyright owner. For the purposes of this definition, "submitted"
means any form of electronic, verbal, or written communication sent
to the Licensor or its representatives, including but not limited to
communication on electronic mailing lists, source code control systems,
and issue tracking systems that are managed by, or on behalf of, the
Licensor for the purpose of discussing and improving the Work, but
excluding communication that is conspicuously marked or otherwise
designated in writing by the copyright owner as "Not a Contribution."
"Contributor" shall mean Licensor and any individual or Legal Entity
on behalf of whom a Contribution has been received by Licensor and
subsequently incorporated within the Work.
2. Grant of Copyright License. Subject to the terms and conditions of
this License, each Contributor hereby grants to You a perpetual,
worldwide, non-exclusive, no-charge, royalty-free, irrevocable
copyright license to reproduce, prepare Derivative Works of,
publicly display, publicly perform, sublicense, and distribute the
Work and such Derivative Works in Source or Object form.
3. Grant of Patent License. Subject to the terms and conditions of
this License, each Contributor hereby grants to You a perpetual,
worldwide, non-exclusive, no-charge, royalty-free, irrevocable
(except as stated in this section) patent license to make, have made,
use, offer to sell, sell, import, and otherwise transfer the Work,
where such license applies only to those patent claims licensable
by such Contributor that are necessarily infringed by their
Contribution(s) alone or by combination of their Contribution(s)
with the Work to which such Contribution(s) was submitted. If You
institute patent litigation against any entity (including a
cross-claim or counterclaim in a lawsuit) alleging that the Work
or a Contribution incorporated within the Work constitutes direct
or contributory patent infringement, then any patent licenses
granted to You under this License for that Work shall terminate
as of the date such litigation is filed.
4. Redistribution. You may reproduce and distribute copies of the
Work or Derivative Works thereof in any medium, with or without
modifications, and in Source or Object form, provided that You
meet the following conditions:
(a) You must give any other recipients of the Work or
Derivative Works a copy of this License; and
(b) You must cause any modified files to carry prominent notices
stating that You changed the files; and
(c) You must retain, in the Source form of any Derivative Works
that You distribute, all copyright, patent, trademark, and
attribution notices from the Source form of the Work,
excluding those notices that do not pertain to any part of
the Derivative Works; and
(d) If the Work includes a "NOTICE" text file as part of its
distribution, then any Derivative Works that You distribute must
include a readable copy of the attribution notices contained
within such NOTICE file, excluding those notices that do not
pertain to any part of the Derivative Works, in at least one
of the following places: within a NOTICE text file distributed
as part of the Derivative Works; within the Source form or
documentation, if provided along with the Derivative Works; or,
within a display generated by the Derivative Works, if and
wherever such third-party notices normally appear. The contents
of the NOTICE file are for informational purposes only and
do not modify the License. You may add Your own attribution
notices within Derivative Works that You distribute, alongside
or as an addendum to the NOTICE text from the Work, provided
that such additional attribution notices cannot be construed
as modifying the License.
You may add Your own copyright statement to Your modifications and
may provide additional or different license terms and conditions
for use, reproduction, or distribution of Your modifications, or
for any such Derivative Works as a whole, provided Your use,
reproduction, and distribution of the Work otherwise complies with
the conditions stated in this License.
5. Submission of Contributions. Unless You explicitly state otherwise,
any Contribution intentionally submitted for inclusion in the Work
by You to the Licensor shall be under the terms and conditions of
this License, without any additional terms or conditions.
Notwithstanding the above, nothing herein shall supersede or modify
the terms of any separate license agreement you may have executed
with Licensor regarding such Contributions.
6. Trademarks. This License does not grant permission to use the trade
names, trademarks, service marks, or product names of the Licensor,
except as required for reasonable and customary use in describing the
origin of the Work and reproducing the content of the NOTICE file.
7. Disclaimer of Warranty. Unless required by applicable law or
agreed to in writing, Licensor provides the Work (and each
Contributor provides its Contributions) on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or
implied, including, without limitation, any warranties or conditions
of TITLE, NON-INFRINGEMENT, MERCHANTABILITY, or FITNESS FOR A
PARTICULAR PURPOSE. You are solely responsible for determining the
appropriateness of using or redistributing the Work and assume any
risks associated with Your exercise of permissions under this License.
8. Limitation of Liability. In no event and under no legal theory,
whether in tort (including negligence), contract, or otherwise,
unless required by applicable law (such as deliberate and grossly
negligent acts) or agreed to in writing, shall any Contributor be
liable to You for damages, including any direct, indirect, special,
incidental, or consequential damages of any character arising as a
result of this License or out of the use or inability to use the
Work (including but not limited to damages for loss of goodwill,
work stoppage, computer failure or malfunction, or any and all
other commercial damages or losses), even if such Contributor
has been advised of the possibility of such damages.
9. Accepting Warranty or Additional Liability. While redistributing
the Work or Derivative Works thereof, You may choose to offer,
and charge a fee for, acceptance of support, warranty, indemnity,
or other liability obligations and/or rights consistent with this
License. However, in accepting such obligations, You may act only
on Your own behalf and on Your sole responsibility, not on behalf
of any other Contributor, and only if You agree to indemnify,
defend, and hold each Contributor harmless for any liability
incurred by, or claims asserted against, such Contributor by reason
of your accepting any such warranty or additional liability.
END OF TERMS AND CONDITIONS
APPENDIX: How to apply the Apache License to your work.
To apply the Apache License to your work, attach the following
boilerplate notice, with the fields enclosed by brackets "[]"
replaced with your own identifying information. (Don't include
the brackets!) The text should be enclosed in the appropriate
comment syntax for the file format. We also recommend that a
file or class name and description of purpose be included on the
same "printed page" as the copyright notice for easier
identification within third-party archives.
Copyright [yyyy] [name of copyright owner]
Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.
You may obtain a copy of the License at
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software
distributed under the License is distributed on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
See the License for the specific language governing permissions and
limitations under the License.


@@ -0,0 +1,387 @@
Metadata-Version: 2.1
Name: pymemcache
Version: 4.0.0
Summary: A comprehensive, fast, pure Python memcached client
Home-page: https://github.com/pinterest/pymemcache
Author: Jon Parise
Author-email: jon@pinterest.com
License: Apache License 2.0
Keywords: memcache,client,database
Classifier: Programming Language :: Python
Classifier: Programming Language :: Python :: 3 :: Only
Classifier: Programming Language :: Python :: 3.7
Classifier: Programming Language :: Python :: 3.8
Classifier: Programming Language :: Python :: 3.9
Classifier: Programming Language :: Python :: 3.10
Classifier: Programming Language :: Python :: Implementation :: PyPy
Classifier: License :: OSI Approved :: Apache Software License
Classifier: Topic :: Database
Requires-Python: >=3.7
Description-Content-Type: text/x-rst
License-File: LICENSE.txt
pymemcache
==========
.. image:: https://img.shields.io/pypi/v/pymemcache.svg
   :target: https://pypi.python.org/pypi/pymemcache

.. image:: https://readthedocs.org/projects/pymemcache/badge/?version=master
   :target: https://pymemcache.readthedocs.io/en/latest/
   :alt: Master Documentation Status
A comprehensive, fast, pure-Python memcached client.
pymemcache supports the following features:
* Complete implementation of the memcached text protocol.
* Connections using UNIX sockets, or TCP over IPv4 or IPv6.
* Configurable timeouts for socket connect and send/recv calls.
* Access to the "noreply" flag, which can significantly increase the speed of writes.
* Flexible, modular and simple approach to serialization and deserialization.
* The (optional) ability to treat network and memcached errors as cache misses.
Installing pymemcache
=====================
Install from pip:

.. code-block:: bash

    pip install pymemcache

For development, clone from github and run the tests:

.. code-block:: bash

    git clone https://github.com/pinterest/pymemcache.git
    cd pymemcache

Run the tests (make sure you have a local memcached server running):

.. code-block:: bash

    tox
Usage
=====
See the documentation here: https://pymemcache.readthedocs.io/en/latest/
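The basic documented pattern is to create a ``Client`` for a server address and call ``set``/``get``. As a self-contained sketch that runs without a live memcached server, the snippet below mirrors that interface with an in-memory stand-in; ``DictClient`` is illustrative only and not part of pymemcache:

```python
# Hypothetical in-memory stand-in mirroring pymemcache's basic Client
# interface (set/get/delete). A real client would instead be created with
# something like Client(("localhost", 11211)) and talk to memcached.
class DictClient:
    def __init__(self):
        self._store = {}

    def set(self, key, value, expire=0):
        self._store[key] = value
        return True

    def get(self, key, default=None):
        return self._store.get(key, default)

    def delete(self, key):
        return self._store.pop(key, None) is not None


client = DictClient()
client.set("some_key", "some_value")
print(client.get("some_key"))            # some_value
print(client.get("missing", "fallback"))  # fallback
```

The real ``Client`` follows the same call shapes, with additional options such as ``expire`` and ``noreply``.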
Django
------
Since version 3.2, Django has included a pymemcache-based cache backend.
See `its documentation
<https://docs.djangoproject.com/en/stable/topics/cache/#memcached>`__.
On older Django versions, you can use
`django-pymemcache <https://github.com/django-pymemcache/django-pymemcache>`_.
Comparison with Other Libraries
===============================
pylibmc
-------
The pylibmc library is a wrapper around libmemcached, implemented in C. It is
fast, implements consistent hashing, the full memcached protocol and timeouts.
It does not provide access to the "noreply" flag. It also isn't pure Python,
so using it with libraries like gevent is out of the question, and its
dependency on libmemcached poses challenges (e.g., it must be built against
the same version of libmemcached that it will use at runtime).
python-memcached
----------------
The python-memcached library implements the entire memcached text protocol, has
a single timeout for all socket calls and has a flexible approach to
serialization and deserialization. It is also written entirely in Python, so
it works well with libraries like gevent. However, it is tied to using thread
locals, doesn't implement "noreply", can't treat errors as cache misses and is
slower than both pylibmc and pymemcache. It is also tied to a specific method
for handling clusters of memcached servers.
memcache_client
---------------
The team at Mixpanel put together a pure Python memcached client as well. It
has more fine-grained support for socket timeouts, but only connects to a single
host. However, it doesn't support most of the memcached API (just get, set,
delete and stats), doesn't support "noreply", has no serialization or
deserialization support and can't treat errors as cache misses.
External Links
==============
The memcached text protocol reference page:
https://github.com/memcached/memcached/blob/master/doc/protocol.txt
The python-memcached library (another pure-Python library):
https://github.com/linsomniac/python-memcached
Mixpanel's Blog post about their memcached client for Python:
https://engineering.mixpanel.com/we-went-down-so-we-wrote-a-better-pure-python-memcache-client-b409a9fe07a9
Mixpanel's pure Python memcached client:
https://github.com/mixpanel/memcache_client
Bye-bye python-memcached, hello pymemcache (migration guide):
https://jugmac00.github.io/blog/bye-bye-python-memcached-hello-pymemcache/
Credits
=======
* `Charles Gordon <http://github.com/cgordon>`_
* `Dave Dash <http://github.com/davedash>`_
* `Dan Crosta <http://github.com/dcrosta>`_
* `Julian Berman <http://github.com/Julian>`_
* `Mark Shirley <http://github.com/maspwr>`_
* `Tim Bart <http://github.com/pims>`_
* `Thomas Orozco <http://github.com/krallin>`_
* `Marc Abramowitz <http://github.com/msabramo>`_
* `Marc-Andre Courtois <http://github.com/mcourtois>`_
* `Julien Danjou <http://github.com/jd>`_
* `INADA Naoki <http://github.com/methane>`_
* `James Socol <http://github.com/jsocol>`_
* `Joshua Harlow <http://github.com/harlowja>`_
* `John Anderson <http://github.com/sontek>`_
* `Adam Chainz <http://github.com/adamchainz>`_
* `Ernest W. Durbin III <https://github.com/ewdurbin>`_
* `Remco van Oosterhout <https://github.com/Vhab>`_
* `Nicholas Charriere <https://github.com/nichochar>`_
* `Joe Gordon <https://github.com/jogo>`_
* `Jon Parise <https://github.com/jparise>`_
* `Stephen Rosen <https://github.com/sirosen>`_
* `Feras Alazzeh <https://github.com/FerasAlazzeh>`_
* `Moisés Guimarães de Medeiros <https://github.com/moisesguimaraes>`_
* `Nick Pope <https://github.com/ngnpope>`_
* `Hervé Beraud <https://github.com/4383>`_
* `Martin Jørgensen <https://github.com/martinnj>`_
We're Hiring!
=============
Are you really excited about open-source? Or great software engineering?
Pinterest is `hiring <https://careers.pinterest.com/>`_!
Changelog
=========
New in version 4.0.0
--------------------
* Dropped Python 2 and 3.6 support
`#321 <https://github.com/pinterest/pymemcache/pull/321>`_
`#363 <https://github.com/pinterest/pymemcache/pull/363>`_
* Begin adding typing
* Add pluggable compression serde
`#407 <https://github.com/pinterest/pymemcache/pull/407>`_
New in version 3.5.2
--------------------
* Handle blank ``STAT`` values.
New in version 3.5.1
--------------------
* ``Client.get`` returns the default when using ``ignore_exc`` and if memcached
is unavailable
* Added ``noreply`` support to ``HashClient.flush_all``.
New in version 3.5.0
--------------------
* Sockets are now closed on ``MemcacheUnexpectedCloseError``.
* Added support for TCP keepalive for client sockets on Linux platforms.
* Added retrying mechanisms by wrapping clients.
New in version 3.4.4
--------------------
* Idle connections will be removed from the pool after ``pool_idle_timeout``.
New in version 3.4.3
--------------------
* Fix ``HashClient.{get,set}_many()`` with UNIX sockets.
New in version 3.4.2
--------------------
* Remove trailing space for commands that don't take arguments, such as
``stats``. This was a violation of the memcached protocol.
New in version 3.4.1
--------------------
* CAS operations will now raise ``MemcacheIllegalInputError`` when ``None`` is
given as the ``cas`` value.
New in version 3.4.0
--------------------
* Added IPv6 support for TCP socket connections. Note that IPv6 may be used in
preference to IPv4 when passing a domain name as the host if an IPv6 address
can be resolved for that domain.
* ``HashClient`` now supports UNIX sockets.
New in version 3.3.0
--------------------
* ``HashClient`` can now be imported from the top-level ``pymemcache`` package
(e.g. ``pymemcache.HashClient``).
* ``HashClient.get_many()`` no longer stores ``False`` for missing keys from
unavailable clients. Instead, the result won't contain the key at all.
* Added missing ``HashClient.close()`` and ``HashClient.quit()``.
New in version 3.2.0
--------------------
* ``PooledClient`` and ``HashClient`` now support custom ``Client`` classes
New in version 3.1.1
--------------------
* Improve ``MockMemcacheClient`` to behave even more like ``Client``
New in version 3.1.0
--------------------
* Add TLS support for TCP sockets.
* Fix corner case when dead hashed server comes back alive.
New in version 3.0.1
--------------------
* Make MockMemcacheClient more consistent with the real client.
* Pass ``encoding`` from HashClient to its pooled clients when ``use_pooling``
is enabled.
New in version 3.0.0
--------------------
* The serialization API has been reworked. Instead of consuming a serializer
and deserializer as separate arguments, client objects now expect an argument
``serde`` to be an object which implements ``serialize`` and ``deserialize``
as methods. (``serialize`` and ``deserialize`` are still supported but
considered deprecated.)
* Validate integer inputs for ``expire``, ``delay``, ``incr``, ``decr``, and
``memlimit`` -- non-integer values now raise ``MemcacheIllegalInputError``
* Validate inputs for ``cas`` -- values which are not integers or strings of
0-9 now raise ``MemcacheIllegalInputError``
* Add ``prepend`` and ``append`` support to ``MockMemcacheClient``.
* Add the ``touch`` method to ``HashClient``.
* Added official support for Python 3.8.
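The reworked serde API described in the 3.0.0 notes expects an object whose ``serialize(key, value)`` returns a ``(data, flags)`` pair and whose ``deserialize(key, data, flags)`` reverses it. A minimal JSON-based sketch of that shape (``JsonSerde`` and the flag value ``2`` are illustrative assumptions, not pymemcache's built-in serde or flag constants):

```python
import json

# Minimal serde-style object: serialize returns (payload, flags), and
# deserialize uses the flags to decide how to decode the payload.
# FLAG_JSON is an arbitrary illustrative value, not a pymemcache constant.
FLAG_JSON = 2


class JsonSerde:
    def serialize(self, key, value):
        if isinstance(value, bytes):
            return value, 0  # pass raw bytes through untouched
        return json.dumps(value).encode("utf-8"), FLAG_JSON

    def deserialize(self, key, data, flags):
        if flags == FLAG_JSON:
            return json.loads(data.decode("utf-8"))
        return data
```

Such an object would be passed as ``serde=JsonSerde()`` when constructing a client.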
New in version 2.2.2
--------------------
* Fix ``long_description`` string in Python packaging.
New in version 2.2.1
--------------------
* Fix ``flags`` when setting multiple differently-typed values at once.
New in version 2.2.0
--------------------
* Drop official support for Python 3.4.
* Use ``setup.cfg`` metadata instead of ``setup.py`` config to generate the package.
* Add ``default_noreply`` parameter to ``HashClient``.
* Add ``encoding`` parameter to ``Client`` constructors (defaults to ``ascii``).
* Add ``flags`` parameter to write operation methods.
* Handle unicode key values in ``MockMemcacheClient`` correctly.
* Improve ASCII encoding failure exception.
New in version 2.1.1
--------------------
* Fix ``setup.py`` dependency on six already being installed.
New in version 2.1.0
--------------------
* Public classes and exceptions can now be imported from the top-level
``pymemcache`` package (e.g. ``pymemcache.Client``).
`#197 <https://github.com/pinterest/pymemcache/pull/197>`_
* Add UNIX domain socket support and document server connection options.
`#206 <https://github.com/pinterest/pymemcache/pull/206>`_
* Add support for the ``cache_memlimit`` command.
`#211 <https://github.com/pinterest/pymemcache/pull/211>`_
* Command keys are now always sent in their original order.
`#209 <https://github.com/pinterest/pymemcache/pull/209>`_
New in version 2.0.0
--------------------
* Change set_many and set_multi api return value. `#179 <https://github.com/pinterest/pymemcache/pull/179>`_
* Fix support for newbytes from python-future. `#187 <https://github.com/pinterest/pymemcache/pull/187>`_
* Add support for Python 3.7, and drop support for Python 3.3
* Properly batch Client.set_many() call. `#182 <https://github.com/pinterest/pymemcache/pull/182>`_
* Improve _check_key() and _store_cmd() performance. `#183 <https://github.com/pinterest/pymemcache/pull/183>`_
* Properly batch Client.delete_many() call. `#184 <https://github.com/pinterest/pymemcache/pull/184>`_
* Add option to explicitly set pickle version used by serde. `#190 <https://github.com/pinterest/pymemcache/pull/190>`_
New in version 1.4.4
--------------------
* Add pypy3 to the Travis test matrix
* Run full benchmarks in tests
* Fix flake8 issues
* Make MockMemcacheClient support non-ASCII strings
* Switch from using pickle format 0 to the highest available version. See `#156 <https://github.com/pinterest/pymemcache/pull/156>`_
*Warning*: different versions of python have different highest pickle versions: https://docs.python.org/3/library/pickle.html
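The warning above can be made concrete with the standard library alone: the highest available protocol depends on the running interpreter, so pinning an explicit, older protocol keeps cached payloads portable across Python versions.

```python
import pickle

# The highest protocol varies by interpreter (e.g. 5 on Python 3.8+),
# so data pickled with it may not unpickle on older Pythons.
print(pickle.HIGHEST_PROTOCOL)

# Pinning an explicit, older protocol keeps the payload portable.
payload = pickle.dumps({"a": 1}, protocol=2)
assert pickle.loads(payload) == {"a": 1}
```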
New in version 1.4.3
--------------------
* Documentation improvements
* Fixed cachedump stats command, see `#103 <https://github.com/pinterest/pymemcache/issues/103>`_
* Honor default_value in HashClient
New in version 1.4.2
--------------------
* Drop support for python 2.6, see `#109 <https://github.com/pinterest/pymemcache/issues/139>`_
New in version 1.4.1
--------------------
* Python 3 serializations fixes `#131 <https://github.com/pinterest/pymemcache/pull/131>`_
* Drop support for pypy3
* Comment cleanup
* Add gets_many to hash_client
* Better checking for illegal chars in key
New in version 1.4.0
--------------------
* Unicode keys support. It is now possible to pass the flag ``allow_unicode_keys`` when creating the clients, thanks @jogo!
* Fixed a bug where PooledClient wasn't following ``default_noreply`` arg set on init, thanks @kols!
* Improved documentation
New in version 1.3.8
--------------------
* Use cPickle instead of pickle when possible (Python 2)
New in version 1.3.7
--------------------
* default parameter on get(key, default=0)
* fixed docs to autogenerate themselves with sphinx
* fix linter to work with python3
* improve error message on illegal Input for the key
* refactor stat parsing
* fix MockMemcacheClient
* fix unicode char in middle of key bug
New in version 1.3.6
--------------------
* Fix flake8 and cleanup tox building
* Fix security vulnerability by sanitizing key input
New in version 1.3.5
--------------------
* Bug fix for HashClient when retries is set to zero.
* Adding the VERSION command to the clients.
New in version 1.3.4
--------------------
* Bug fix for the HashClient that corrects behavior when there are no working servers.
New in version 1.3.3
--------------------
* Adding caching to the Travis build.
* A bug fix for pluggable hashing in HashClient.
* Adding a default_noreply argument to the Client ctor.
New in version 1.3.2
--------------------
* Making the location of Memcache Exceptions backwards compatible.
New in version 1.3.0
--------------------
* Python 3 Support
* Introduced HashClient that uses consistent hashing for allocating keys across many memcached nodes. It can also detect servers going down and rebalance keys across the available nodes.
* Retry sock.recv() when it raises EINTR
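The key distribution behind ``HashClient`` can be sketched as a rendezvous (highest-random-weight) hash: every key is assigned to the node that scores highest for it, so removing a node only remaps the keys that lived on that node. This stdlib-only sketch scores with MD5, whereas pymemcache's actual ``RendezvousHash`` uses murmur3:

```python
import hashlib


def rendezvous_node(key, nodes):
    """Pick the node with the highest hash score for this key (HRW)."""
    return max(
        nodes,
        key=lambda n: int(hashlib.md5(f"{n}:{key}".encode()).hexdigest(), 16),
    )


nodes = ["10.0.0.1:11211", "10.0.0.2:11211", "10.0.0.3:11211"]
keys = [f"user:{i}" for i in range(100)]
before = {k: rendezvous_node(k, nodes) for k in keys}

# Drop one node: only the keys that lived on it move elsewhere.
after = {k: rendezvous_node(k, nodes[:-1]) for k in keys}
moved = [k for k in keys if before[k] != after[k]]
assert all(before[k] == nodes[-1] for k in moved)
```

Because each key's score per node is independent of the other nodes, losing a server never disturbs keys it didn't own, which is the rebalancing property the changelog entry refers to.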
New in version 1.2.9
--------------------
* Introduced PooledClient, a thread-safe pool of clients.


@@ -0,0 +1,53 @@
pymemcache-4.0.0.dist-info/INSTALLER,sha256=zuuue4knoyJ-UwPPXg8fezS7VCrXJQrAP7zeNuwvFQg,4
pymemcache-4.0.0.dist-info/LICENSE.txt,sha256=z8d0m5b2O9McPEK1xHG_dWgUBT6EfBDz6wA0F7xSPTA,11358
pymemcache-4.0.0.dist-info/METADATA,sha256=aJhanppO-xfdUsaGxPNotz1PyKpIQArFxBDzAff40oQ,14360
pymemcache-4.0.0.dist-info/RECORD,,
pymemcache-4.0.0.dist-info/REQUESTED,sha256=47DEQpj8HBSa-_TImW-5JCeuQeRkm5NMpJWZG3hSuFU,0
pymemcache-4.0.0.dist-info/WHEEL,sha256=z9j0xAa_JmUKMpmz72K0ZGALSM_n-wQVmGbleXx2VHg,110
pymemcache-4.0.0.dist-info/top_level.txt,sha256=A7o5woZP9MH_1OrIbwQIsJsB8UcIX4Kcj4IBeuYakx0,11
pymemcache/__init__.py,sha256=PHZ_lmH3ue3R7QKJP-9OMuTl19MVYgDFopMFtSQCNJk,693
pymemcache/__pycache__/__init__.cpython-311.pyc,,
pymemcache/__pycache__/exceptions.cpython-311.pyc,,
pymemcache/__pycache__/fallback.cpython-311.pyc,,
pymemcache/__pycache__/pool.cpython-311.pyc,,
pymemcache/__pycache__/serde.cpython-311.pyc,,
pymemcache/client/__init__.py,sha256=iwKIUkD67iy93gQhhiRDhD1bL-bKVNMKncg8pE7ie04,706
pymemcache/client/__pycache__/__init__.cpython-311.pyc,,
pymemcache/client/__pycache__/base.cpython-311.pyc,,
pymemcache/client/__pycache__/hash.cpython-311.pyc,,
pymemcache/client/__pycache__/murmur3.cpython-311.pyc,,
pymemcache/client/__pycache__/rendezvous.cpython-311.pyc,,
pymemcache/client/__pycache__/retrying.cpython-311.pyc,,
pymemcache/client/base.py,sha256=95JAH7upCHoMO7b0YkOF01O5Xtb2DB2GJZsUSPSVwdw,61383
pymemcache/client/hash.py,sha256=5v_uMGFux8xd7_aw5P7u6GQWb77gqssjJ2bruMxr3Qc,16047
pymemcache/client/murmur3.py,sha256=HaN3Lwzl1e8gn-a8_lTgiEOdfD0cJnPR0FOaSt6iBSA,1465
pymemcache/client/rendezvous.py,sha256=bllKETXBAemaHWXG9nv0eaaUnzzacf-1hScUU8sHTmk,1273
pymemcache/client/retrying.py,sha256=NLMUm--hfBbElQ3o5LK9pubFMH8KlJiDrcSrSrvWiJ8,6443
pymemcache/exceptions.py,sha256=sq1e0N8Qk79eTQWl_8rxDBHWgaSD1mzkbS94iDv6-w8,1252
pymemcache/fallback.py,sha256=g7HlZ_BNaC9R2-lBxUYA2F0hqhFcpvWMttEsDTUOzSo,4186
pymemcache/pool.py,sha256=rSODZOZKb7n29SyNjB0FjtHLkEmpRI0_38Nfz1Mvr4w,4384
pymemcache/serde.py,sha256=1RgrobrPrv_sSY2i80oTMW95jcpeYsMHXspxSzb7xKI,6042
pymemcache/test/__init__.py,sha256=47DEQpj8HBSa-_TImW-5JCeuQeRkm5NMpJWZG3hSuFU,0
pymemcache/test/__pycache__/__init__.cpython-311.pyc,,
pymemcache/test/__pycache__/conftest.cpython-311.pyc,,
pymemcache/test/__pycache__/test_benchmark.cpython-311.pyc,,
pymemcache/test/__pycache__/test_client.cpython-311.pyc,,
pymemcache/test/__pycache__/test_client_hash.cpython-311.pyc,,
pymemcache/test/__pycache__/test_client_retry.cpython-311.pyc,,
pymemcache/test/__pycache__/test_compression.cpython-311.pyc,,
pymemcache/test/__pycache__/test_integration.cpython-311.pyc,,
pymemcache/test/__pycache__/test_rendezvous.cpython-311.pyc,,
pymemcache/test/__pycache__/test_serde.cpython-311.pyc,,
pymemcache/test/__pycache__/test_utils.cpython-311.pyc,,
pymemcache/test/__pycache__/utils.cpython-311.pyc,,
pymemcache/test/conftest.py,sha256=9JwSnZg-2A5ceK1getNevPujqPemexd_MOyTSIssw00,3018
pymemcache/test/test_benchmark.py,sha256=3d9xT8WyOt1pVcF_ZOnlRv98_UvscaL-8rnQL7vyDgM,2963
pymemcache/test/test_client.py,sha256=b9C14iEq47ZG8lyQ2G9mFaLo67fHGd7jzU0qALFASzc,56913
pymemcache/test/test_client_hash.py,sha256=GwRWNQQzu2DACQJnlt9MERxB8TTT4wBnivzGDrE06hQ,17202
pymemcache/test/test_client_retry.py,sha256=Ahlog0AREKILTlgRyTB_SmsztPMbRI11KJVoaJcKPqc,10361
pymemcache/test/test_compression.py,sha256=rL6pvXF4EU9eb3BFB1m2J7vch3a43-laE6ktXxYH7tk,5638
pymemcache/test/test_integration.py,sha256=W3Rc4sjnb3VPooZrhSU9F-V-hEmyLvd6wxPL8r9uHVI,12263
pymemcache/test/test_rendezvous.py,sha256=gClzxCNpl4qUotleOsvES5wuto1GU4_OOWXP2C6qEBw,5188
pymemcache/test/test_serde.py,sha256=Ou5JGCRXfvDfnq4LTtAAjgGEcsHXXXCS1x9UaP3IJww,3843
pymemcache/test/test_utils.py,sha256=uaI1scSgfV5LMTstte3iCuxTmZ3-XhlZk7QuHRuBFU4,2576
pymemcache/test/utils.py,sha256=f5JCkYfQ_KLBxrxzgHp-cjop-LeXgwIsvBEV-VqD2DE,6870


@@ -0,0 +1,6 @@
Wheel-Version: 1.0
Generator: bdist_wheel (0.37.1)
Root-Is-Purelib: true
Tag: py2-none-any
Tag: py3-none-any


@@ -0,0 +1 @@
pymemcache


@@ -0,0 +1,14 @@
__version__ = "4.0.0"
from pymemcache.client.base import Client # noqa
from pymemcache.client.base import PooledClient # noqa
from pymemcache.client.hash import HashClient # noqa
from pymemcache.client.base import KeepaliveOpts # noqa
from pymemcache.exceptions import MemcacheError # noqa
from pymemcache.exceptions import MemcacheClientError # noqa
from pymemcache.exceptions import MemcacheUnknownCommandError # noqa
from pymemcache.exceptions import MemcacheIllegalInputError # noqa
from pymemcache.exceptions import MemcacheServerError # noqa
from pymemcache.exceptions import MemcacheUnknownError # noqa
from pymemcache.exceptions import MemcacheUnexpectedCloseError # noqa


@@ -0,0 +1,14 @@
# API Backwards compatibility
from pymemcache.client.base import Client # noqa
from pymemcache.client.base import PooledClient # noqa
from pymemcache.client.hash import HashClient # noqa
from pymemcache.client.retrying import RetryingClient # noqa
from pymemcache.exceptions import MemcacheError # noqa
from pymemcache.exceptions import MemcacheClientError # noqa
from pymemcache.exceptions import MemcacheUnknownCommandError # noqa
from pymemcache.exceptions import MemcacheIllegalInputError # noqa
from pymemcache.exceptions import MemcacheServerError # noqa
from pymemcache.exceptions import MemcacheUnknownError # noqa
from pymemcache.exceptions import MemcacheUnexpectedCloseError # noqa

File diff suppressed because it is too large


@@ -0,0 +1,447 @@
import collections
import socket
import time
import logging

from pymemcache.client.base import (
    Client,
    PooledClient,
    check_key_helper,
    normalize_server_spec,
)
from pymemcache.client.rendezvous import RendezvousHash
from pymemcache.exceptions import MemcacheError

logger = logging.getLogger(__name__)


class HashClient:
    """
    A client for communicating with a cluster of memcached servers
    """

    #: :class:`Client` class used to create new clients
    client_class = Client

    def __init__(
        self,
        servers,
        hasher=RendezvousHash,
        serde=None,
        serializer=None,
        deserializer=None,
        connect_timeout=None,
        timeout=None,
        no_delay=False,
        socket_module=socket,
        socket_keepalive=None,
        key_prefix=b"",
        max_pool_size=None,
        pool_idle_timeout=0,
        lock_generator=None,
        retry_attempts=2,
        retry_timeout=1,
        dead_timeout=60,
        use_pooling=False,
        ignore_exc=False,
        allow_unicode_keys=False,
        default_noreply=True,
        encoding="ascii",
        tls_context=None,
    ):
        """
        Constructor.

        Args:
          servers: list() of tuple(hostname, port) or string containing a UNIX
                   socket path.
          hasher: optional class with three functions ``get_node``,
                  ``add_node``, and ``remove_node``; defaults to
                  Rendezvous (HRW) hash.
          use_pooling: use :py:class:`.PooledClient` as the default underlying
                       class. ``max_pool_size`` and ``lock_generator`` can
                       be used with this. default: False
          retry_attempts: Number of times a client is retried before it
                          is marked dead and removed from the pool.
          retry_timeout (float): Time in seconds that should pass between retry
                                 attempts.
          dead_timeout (float): Time in seconds before attempting to add a node
                                back in the pool.
          encoding: optional str, controls data encoding (defaults to 'ascii').

        Further arguments are interpreted as for :py:class:`.Client`
        constructor.
        """
        self.clients = {}
        self.retry_attempts = retry_attempts
        self.retry_timeout = retry_timeout
        self.dead_timeout = dead_timeout
        self.use_pooling = use_pooling
        self.key_prefix = key_prefix
        self.ignore_exc = ignore_exc
        self.allow_unicode_keys = allow_unicode_keys
        self._failed_clients = {}
        self._dead_clients = {}
        self._last_dead_check_time = time.time()

        self.hasher = hasher()

        self.default_kwargs = {
            "connect_timeout": connect_timeout,
            "timeout": timeout,
            "no_delay": no_delay,
            "socket_module": socket_module,
            "socket_keepalive": socket_keepalive,
            "key_prefix": key_prefix,
            "serde": serde,
            "serializer": serializer,
            "deserializer": deserializer,
            "allow_unicode_keys": allow_unicode_keys,
            "default_noreply": default_noreply,
            "encoding": encoding,
            "tls_context": tls_context,
        }

        if use_pooling is True:
            self.default_kwargs.update(
                {
                    "max_pool_size": max_pool_size,
                    "pool_idle_timeout": pool_idle_timeout,
                    "lock_generator": lock_generator,
                }
            )

        for server in servers:
            self.add_server(normalize_server_spec(server))
        self.encoding = encoding
        self.tls_context = tls_context

    def _make_client_key(self, server):
        if isinstance(server, (list, tuple)) and len(server) == 2:
            return "%s:%s" % server
        return server

    def add_server(self, server, port=None) -> None:
        # To maintain backward compatibility, if a port is provided, assume
        # that server wasn't provided as a (host, port) tuple.
        if port is not None:
            if not isinstance(server, str):
                raise TypeError("Server must be a string when passing port.")
            server = (server, port)

        _class = PooledClient if self.use_pooling else self.client_class
        client = _class(server, **self.default_kwargs)
        if self.use_pooling:
            client.client_class = self.client_class

        key = self._make_client_key(server)
        self.clients[key] = client
        self.hasher.add_node(key)

    def remove_server(self, server, port=None) -> None:
        # To maintain backward compatibility, if a port is provided, assume
        # that server wasn't provided as a (host, port) tuple.
        if port is not None:
            if not isinstance(server, str):
                raise TypeError("Server must be a string when passing port.")
server = (server, port)
key = self._make_client_key(server)
dead_time = time.time()
self._failed_clients.pop(server)
self._dead_clients[server] = dead_time
self.hasher.remove_node(key)
def _retry_dead(self) -> None:
current_time = time.time()
ldc = self._last_dead_check_time
# We have reached the retry timeout
if current_time - ldc > self.dead_timeout:
candidates = []
for server, dead_time in self._dead_clients.items():
if current_time - dead_time > self.dead_timeout:
candidates.append(server)
for server in candidates:
logger.debug("bringing server back into rotation %s", server)
self.add_server(server)
del self._dead_clients[server]
self._last_dead_check_time = current_time
def _get_client(self, key):
check_key_helper(key, self.allow_unicode_keys, self.key_prefix)
if self._dead_clients:
self._retry_dead()
server = self.hasher.get_node(key)
# We've run out of servers to try
if server is None:
if self.ignore_exc is True:
return
raise MemcacheError("All servers seem to be down right now")
return self.clients[server]
def _safely_run_func(self, client, func, default_val, *args, **kwargs):
try:
if client.server in self._failed_clients:
# This server is currently failing; let's check whether it is in
# retry or marked as dead
failed_metadata = self._failed_clients[client.server]
# we haven't used all our retry attempts yet; if enough time
# has passed, let's just retry using it
if failed_metadata["attempts"] < self.retry_attempts:
failed_time = failed_metadata["failed_time"]
if time.time() - failed_time > self.retry_timeout:
logger.debug("retrying failed server: %s", client.server)
result = func(*args, **kwargs)
# we were successful, so let's remove it from the failed
# clients
self._failed_clients.pop(client.server)
return result
return default_val
else:
# We've reached our max retry attempts, we need to mark
# the server as dead
logger.debug("marking server as dead: %s", client.server)
self.remove_server(client.server)
result = func(*args, **kwargs)
return result
# Connecting to the server failed, so we should enter
# retry mode
except OSError:
self._mark_failed_server(client.server)
# if we haven't enabled ignore_exc, don't move on gracefully, just
# raise the exception
if not self.ignore_exc:
raise
return default_val
except Exception:
# any exceptions that aren't OSError also need to be handled
# gracefully
if not self.ignore_exc:
raise
return default_val
def _safely_run_set_many(self, client, values, *args, **kwargs):
failed = []
succeeded = []
try:
if client.server in self._failed_clients:
# This server is currently failing; let's check whether it is in
# retry or marked as dead
failed_metadata = self._failed_clients[client.server]
# we haven't used all our retry attempts yet; if enough time
# has passed, let's just retry using it
if failed_metadata["attempts"] < self.retry_attempts:
failed_time = failed_metadata["failed_time"]
if time.time() - failed_time > self.retry_timeout:
logger.debug("retrying failed server: %s", client.server)
succeeded, failed, err = self._set_many(
client, values, *args, **kwargs
)
if err is not None:
raise err
# we were successful, so let's remove it from the failed
# clients
self._failed_clients.pop(client.server)
return failed
return values.keys()
else:
# We've reached our max retry attempts, we need to mark
# the server as dead
logger.debug("marking server as dead: %s", client.server)
self.remove_server(client.server)
succeeded, failed, err = self._set_many(client, values, *args, **kwargs)
if err is not None:
raise err
return failed
# Connecting to the server failed, so we should enter
# retry mode
except OSError:
self._mark_failed_server(client.server)
# if we haven't enabled ignore_exc, don't move on gracefully, just
# raise the exception
if not self.ignore_exc:
raise
return list(set(values.keys()) - set(succeeded))
except Exception:
# any exceptions that aren't OSError also need to be handled
# gracefully
if not self.ignore_exc:
raise
return list(set(values.keys()) - set(succeeded))
def _mark_failed_server(self, server):
# This client has never failed, so let's mark it as failing
if server not in self._failed_clients and self.retry_attempts > 0:
self._failed_clients[server] = {
"failed_time": time.time(),
"attempts": 0,
}
# We aren't allowing any retries, we should mark the server as
# dead immediately
elif server not in self._failed_clients and self.retry_attempts <= 0:
self._failed_clients[server] = {
"failed_time": time.time(),
"attempts": 0,
}
logger.debug("marking server as dead %s", server)
self.remove_server(server)
# This client has failed previously, we need to update the metadata
# to reflect that we have attempted it again
else:
failed_metadata = self._failed_clients[server]
failed_metadata["attempts"] += 1
failed_metadata["failed_time"] = time.time()
self._failed_clients[server] = failed_metadata
def _run_cmd(self, cmd, key, default_val, *args, **kwargs):
client = self._get_client(key)
if client is None:
return default_val
func = getattr(client, cmd)
args = list(args)
args.insert(0, key)
return self._safely_run_func(client, func, default_val, *args, **kwargs)
def _set_many(self, client, values, *args, **kwargs):
failed = []
succeeded = []
try:
failed = client.set_many(values, *args, **kwargs)
except Exception as e:
if not self.ignore_exc:
return succeeded, failed, e
succeeded = [key for key in values if key not in failed]
return succeeded, failed, None
def close(self):
for client in self.clients.values():
self._safely_run_func(client, client.close, False)
disconnect_all = close
def set(self, key, *args, **kwargs):
return self._run_cmd("set", key, False, *args, **kwargs)
def get(self, key, default=None, **kwargs):
return self._run_cmd("get", key, default, default=default, **kwargs)
def incr(self, key, *args, **kwargs):
return self._run_cmd("incr", key, False, *args, **kwargs)
def decr(self, key, *args, **kwargs):
return self._run_cmd("decr", key, False, *args, **kwargs)
def set_many(self, values, *args, **kwargs):
client_batches = collections.defaultdict(dict)
failed = []
for key, value in values.items():
client = self._get_client(key)
if client is None:
failed.append(key)
continue
client_batches[client.server][key] = value
for server, values in client_batches.items():
client = self.clients[self._make_client_key(server)]
failed += self._safely_run_set_many(client, values, *args, **kwargs)
return failed
set_multi = set_many
def get_many(self, keys, gets=False, *args, **kwargs):
client_batches = collections.defaultdict(list)
end = {}
for key in keys:
client = self._get_client(key)
if client is None:
continue
client_batches[client.server].append(key)
for server, keys in client_batches.items():
client = self.clients[self._make_client_key(server)]
new_args = list(args)
new_args.insert(0, keys)
if gets:
get_func = client.gets_many
else:
get_func = client.get_many
result = self._safely_run_func(client, get_func, {}, *new_args, **kwargs)
end.update(result)
return end
get_multi = get_many
def gets(self, key, *args, **kwargs):
return self._run_cmd("gets", key, None, *args, **kwargs)
def gets_many(self, keys, *args, **kwargs):
return self.get_many(keys, gets=True, *args, **kwargs)
gets_multi = gets_many
def add(self, key, *args, **kwargs):
return self._run_cmd("add", key, False, *args, **kwargs)
def prepend(self, key, *args, **kwargs):
return self._run_cmd("prepend", key, False, *args, **kwargs)
def append(self, key, *args, **kwargs):
return self._run_cmd("append", key, False, *args, **kwargs)
def delete(self, key, *args, **kwargs):
return self._run_cmd("delete", key, False, *args, **kwargs)
def delete_many(self, keys, *args, **kwargs) -> bool:
for key in keys:
self._run_cmd("delete", key, False, *args, **kwargs)
return True
delete_multi = delete_many
def cas(self, key, *args, **kwargs):
return self._run_cmd("cas", key, False, *args, **kwargs)
def replace(self, key, *args, **kwargs):
return self._run_cmd("replace", key, False, *args, **kwargs)
def touch(self, key, *args, **kwargs):
return self._run_cmd("touch", key, False, *args, **kwargs)
def flush_all(self, *args, **kwargs) -> None:
for client in self.clients.values():
self._safely_run_func(client, client.flush_all, False, *args, **kwargs)
def quit(self) -> None:
for client in self.clients.values():
self._safely_run_func(client, client.quit, False)
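The failure bookkeeping driven by ``retry_attempts``, ``retry_timeout`` and ``dead_timeout`` above can be sketched in isolation. This is an illustrative stand-in, not the library's code; the ``FailureTracker`` name and the example address are invented:

```python
import time

# Sketch of HashClient's failure bookkeeping: a server gets
# `retry_attempts` chances before it is marked dead and removed
# from rotation.
class FailureTracker:
    def __init__(self, retry_attempts=2):
        self.retry_attempts = retry_attempts
        self.failed = {}   # server -> {"failed_time": ..., "attempts": ...}
        self.dead = {}     # server -> time it was marked dead

    def mark_failed(self, server):
        if server not in self.failed:
            # First failure: start tracking this server.
            self.failed[server] = {"failed_time": time.time(), "attempts": 0}
        else:
            meta = self.failed[server]
            meta["attempts"] += 1
            meta["failed_time"] = time.time()
            if meta["attempts"] >= self.retry_attempts:
                # Out of retries: move the server to the dead set.
                self.failed.pop(server)
                self.dead[server] = time.time()

tracker = FailureTracker(retry_attempts=2)
tracker.mark_failed("10.0.0.1:11211")   # first failure: tracked
tracker.mark_failed("10.0.0.1:11211")   # attempts -> 1
tracker.mark_failed("10.0.0.1:11211")   # attempts -> 2: marked dead
print("10.0.0.1:11211" in tracker.dead)  # True
```

In the real client, ``_retry_dead`` later brings dead servers back into rotation once ``dead_timeout`` has elapsed.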

View File

@@ -0,0 +1,55 @@
def murmur3_32(data, seed=0):
"""MurmurHash3 was written by Austin Appleby, and is placed in the
public domain. The author hereby disclaims copyright to this source
code."""
c1 = 0xCC9E2D51
c2 = 0x1B873593
length = len(data)
h1 = seed
roundedEnd = length & 0xFFFFFFFC # round down to 4 byte block
for i in range(0, roundedEnd, 4):
# little endian load order
k1 = (
(ord(data[i]) & 0xFF)
| ((ord(data[i + 1]) & 0xFF) << 8)
| ((ord(data[i + 2]) & 0xFF) << 16)
| (ord(data[i + 3]) << 24)
)
k1 *= c1
k1 = (k1 << 15) | ((k1 & 0xFFFFFFFF) >> 17) # ROTL32(k1,15)
k1 *= c2
h1 ^= k1
h1 = (h1 << 13) | ((h1 & 0xFFFFFFFF) >> 19) # ROTL32(h1,13)
h1 = h1 * 5 + 0xE6546B64
# tail
k1 = 0
val = length & 0x03
if val == 3:
k1 = (ord(data[roundedEnd + 2]) & 0xFF) << 16
# fallthrough
if val in [2, 3]:
k1 |= (ord(data[roundedEnd + 1]) & 0xFF) << 8
# fallthrough
if val in [1, 2, 3]:
k1 |= ord(data[roundedEnd]) & 0xFF
k1 *= c1
k1 = (k1 << 15) | ((k1 & 0xFFFFFFFF) >> 17) # ROTL32(k1,15)
k1 *= c2
h1 ^= k1
# finalization
h1 ^= length
# fmix(h1)
h1 ^= (h1 & 0xFFFFFFFF) >> 16
h1 *= 0x85EBCA6B
h1 ^= (h1 & 0xFFFFFFFF) >> 13
h1 *= 0xC2B2AE35
h1 ^= (h1 & 0xFFFFFFFF) >> 16
return h1 & 0xFFFFFFFF
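A note on the rotate idiom above: Python integers are arbitrary precision, so ``(k1 << 15) | ((k1 & 0xFFFFFFFF) >> 17)`` only becomes a true 32-bit left rotate once the result is masked to 32 bits, which the surrounding code does lazily. A quick sanity check of the equivalence (helper names are ours):

```python
def rotl32(x, r):
    # Plain 32-bit left rotate for comparison.
    x &= 0xFFFFFFFF
    return ((x << r) | (x >> (32 - r))) & 0xFFFFFFFF

def inline_rotl15(k1):
    # The expression used in murmur3_32, masked to 32 bits.
    return ((k1 << 15) | ((k1 & 0xFFFFFFFF) >> 17)) & 0xFFFFFFFF

for v in (0x1, 0x12345678, 0xDEADBEEF, 0xFFFFFFFF):
    assert inline_rotl15(v) == rotl32(v, 15)
```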

View File

@@ -0,0 +1,46 @@
from pymemcache.client.murmur3 import murmur3_32
class RendezvousHash:
"""
Implements the Highest Random Weight (HRW) hashing algorithm most
commonly referred to as rendezvous hashing.
Originally developed as part of python-clandestined.
Copyright (c) 2014 Ernest W. Durbin III
"""
def __init__(self, nodes=None, seed=0, hash_function=murmur3_32):
"""
Constructor.
"""
self.nodes = []
self.seed = seed
if nodes is not None:
self.nodes = nodes
self.hash_function = lambda x: hash_function(x, seed)
def add_node(self, node):
if node not in self.nodes:
self.nodes.append(node)
def remove_node(self, node):
if node in self.nodes:
self.nodes.remove(node)
else:
raise ValueError("No such node %s to remove" % (node))
def get_node(self, key):
high_score = -1
winner = None
for node in self.nodes:
score = self.hash_function(f"{node}-{key}")
if score > high_score:
(high_score, winner) = (score, node)
elif score == high_score:
(high_score, winner) = (score, max(str(node), str(winner)))
return winner
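One property worth calling out: with HRW hashing, removing a node only remaps the keys that were assigned to that node; every other key keeps its server. A self-contained sketch using a stand-in hash (MD5 here, not the ``murmur3_32`` default; node and key names are illustrative):

```python
import hashlib

def score(node, key):
    # Stand-in for the per-(node, key) hash used by RendezvousHash.
    digest = hashlib.md5(f"{node}-{key}".encode()).digest()
    return int.from_bytes(digest[:4], "big")

def get_node(nodes, key):
    # Highest Random Weight: the node with the top score wins.
    return max(nodes, key=lambda n: score(n, key))

nodes = ["cache1", "cache2", "cache3"]
keys = ["user:%d" % i for i in range(100)]
before = {k: get_node(nodes, k) for k in keys}
after = {k: get_node([n for n in nodes if n != "cache2"], k) for k in keys}
moved = [k for k in keys if before[k] != after[k]]
# Every key that moved must have lived on the removed node.
assert all(before[k] == "cache2" for k in moved)
```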

View File

@@ -0,0 +1,178 @@
""" Module containing the RetryingClient wrapper class. """
from time import sleep
def _ensure_tuple_argument(argument_name, argument_value):
"""
Helper function to ensure the given arguments are tuples of Exceptions (or
subclasses), or can at least be converted to such.
Args:
argument_name: str, name of the argument we're checking, only used for
raising meaningful exceptions.
argument_value: any, the argument itself.
Returns:
tuple[Exception]: A tuple with the elements from the argument if they are
valid.
Exceptions:
ValueError: If the argument was not None, tuple or Iterable.
ValueError: If any of the elements of the argument is not a subclass of
Exception.
"""
# Ensure the argument is a tuple, set or list.
if argument_value is None:
return tuple()
elif not isinstance(argument_value, (tuple, set, list)):
raise ValueError("%s must be either a tuple, a set or a list." % argument_name)
# Convert the argument before checking contents.
argument_tuple = tuple(argument_value)
# Check that all the elements are actually inherited from Exception.
# (Catchable)
if not all([issubclass(arg, Exception) for arg in argument_tuple]):
raise ValueError(
"%s is only allowed to contain elements that are subclasses of "
"Exception." % argument_name
)
return argument_tuple
class RetryingClient(object):
"""
Wrapper client that retries calls made through an underlying client.
"""
def __init__(
self, client, attempts=2, retry_delay=0, retry_for=None, do_not_retry_for=None
):
"""
Constructor for RetryingClient.
Args:
client: Client|PooledClient|HashClient, inner client to use for
performing actual work.
attempts: optional int, how many times to attempt an action before
failing. Must be 1 or above. Defaults to 2.
retry_delay: optional int|float, how many seconds to sleep between
each attempt.
Defaults to 0.
retry_for: optional None|tuple|set|list, what exceptions to
allow retries for. Will allow retries for all exceptions if None.
Example:
`(MemcacheClientError, MemcacheUnexpectedCloseError)`
Accepts any class that is a subclass of Exception.
Defaults to None.
do_not_retry_for: optional None|tuple|set|list, what
exceptions should never be retried. Will not block retries for any
Exception if None.
Example:
`(IOError, MemcacheIllegalInputError)`
Accepts any class that is a subclass of Exception.
Defaults to None.
Exceptions:
ValueError: If `attempts` is not 1 or above.
ValueError: If `retry_for` or `do_not_retry_for` is not None, tuple or
Iterable.
ValueError: If any of the elements of `retry_for` or
`do_not_retry_for` is not a subclass of Exception.
ValueError: If there is any overlap between `retry_for` and
`do_not_retry_for`.
"""
if attempts < 1:
raise ValueError(
"`attempts` argument must be at least 1. "
"Otherwise no attempts are made."
)
self._client = client
self._attempts = attempts
self._retry_delay = retry_delay
self._retry_for = _ensure_tuple_argument("retry_for", retry_for)
self._do_not_retry_for = _ensure_tuple_argument(
"do_not_retry_for", do_not_retry_for
)
# Verify no overlap in the go/no-go exception collections.
for exc_class in self._retry_for:
if exc_class in self._do_not_retry_for:
raise ValueError(
'Exception class "%s" was present in both `retry_for` '
"and `do_not_retry_for`. Any exception class is only "
"allowed in a single argument." % repr(exc_class)
)
# Take dir from the client to speed up future checks.
self._client_dir = dir(self._client)
def _retry(self, name, func, *args, **kwargs):
"""
Workhorse function, handles retry logic.
Args:
name: str, Name of the function called.
func: callable, the function to retry.
*args: args, array arguments to pass to the function.
**kwargs: kwargs, keyword arguments to pass to the function.
"""
for attempt in range(self._attempts):
try:
result = func(*args, **kwargs)
return result
except Exception as exc:
# Raise the exception to caller if either is met:
# - We've used the last attempt.
# - self._retry_for is set, and we do not match.
# - self._do_not_retry_for is set, and we do match.
# - name is not actually a member of the client class.
if (
attempt >= self._attempts - 1
or (self._retry_for and not isinstance(exc, self._retry_for))
or (
self._do_not_retry_for
and isinstance(exc, self._do_not_retry_for)
)
or name not in self._client_dir
):
raise exc
# Sleep and try again.
sleep(self._retry_delay)
# This is the real magic soup of the class, we catch anything that isn't
# strictly defined for ourselves and pass it on to whatever client we've
# been given.
def __getattr__(self, name):
return lambda *args, **kwargs: self._retry(
name, self._client.__getattribute__(name), *args, **kwargs
)
# We implement these explicitly because they're "magic" functions and won't
# get passed on by __getattr__.
def __dir__(self):
return self._client_dir
# These magics are copied from the base client.
def __setitem__(self, key, value):
self.set(key, value, noreply=True)
def __getitem__(self, key):
value = self.get(key)
if value is None:
raise KeyError
return value
def __delitem__(self, key):
self.delete(key, noreply=True)
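The retry loop above can be illustrated with a toy client that fails transiently. This is a standalone sketch of the same attempts/``retry_for`` logic, not the ``RetryingClient`` class itself; all names are invented:

```python
class FlakyStore:
    """Fails the first `failures` calls, then succeeds."""
    def __init__(self, failures):
        self.failures = failures
        self.calls = 0

    def get(self, key):
        self.calls += 1
        if self.calls <= self.failures:
            raise ConnectionError("transient failure")
        return "value-for-%s" % key

def retry(func, *args, attempts=3, retry_for=(ConnectionError,)):
    # Same shape as RetryingClient._retry: re-raise on the last
    # attempt or when the exception is not retryable.
    for attempt in range(attempts):
        try:
            return func(*args)
        except Exception as exc:
            if attempt >= attempts - 1 or not isinstance(exc, retry_for):
                raise

store = FlakyStore(failures=2)
print(retry(store.get, "k"))  # succeeds on the third attempt
```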

View File

@@ -0,0 +1,45 @@
class MemcacheError(Exception):
"Base exception class"
pass
class MemcacheClientError(MemcacheError):
"""Raised when memcached fails to parse the arguments to a request, likely
due to a malformed key and/or value, a bug in this library, or a version
mismatch with memcached."""
pass
class MemcacheUnknownCommandError(MemcacheClientError):
"""Raised when memcached fails to parse a request, likely due to a bug in
this library or a version mismatch with memcached."""
pass
class MemcacheIllegalInputError(MemcacheClientError):
"""Raised when a key or value is not legal for Memcache (see the class docs
for Client for more details)."""
pass
class MemcacheServerError(MemcacheError):
"""Raised when memcached reports a failure while processing a request,
likely due to a bug or transient issue in memcached."""
pass
class MemcacheUnknownError(MemcacheError):
"""Raised when this library receives a response from memcached that it
cannot parse, likely due to a bug in this library or a version mismatch
with memcached."""
pass
class MemcacheUnexpectedCloseError(MemcacheServerError):
"Raised when the connection with memcached closes unexpectedly."
pass
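Since these exceptions form a strict hierarchy, a caller can catch at whatever granularity it needs. A minimal re-declaration of three of the classes shows the effect (catching the base covers the whole subtree):

```python
class MemcacheError(Exception):
    "Base exception class (mirrors the hierarchy above)."

class MemcacheServerError(MemcacheError):
    "Server-side failures."

class MemcacheUnexpectedCloseError(MemcacheServerError):
    "Connection closed unexpectedly."

try:
    raise MemcacheUnexpectedCloseError("connection dropped")
except MemcacheError as e:
    # The most specific type is preserved even when catching the base.
    caught = type(e).__name__

assert caught == "MemcacheUnexpectedCloseError"
```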

View File

@@ -0,0 +1,123 @@
# Copyright 2012 Pinterest.com
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
"""
A client for falling back to older memcached servers when performing reads.
It is sometimes necessary to deploy memcached on new servers, or with a
different configuration. In these cases, it is undesirable to start up an
empty memcached server and point traffic to it, since the cache will be cold,
and the backing store will have a large increase in traffic.
This class attempts to solve that problem by providing an interface identical
to the Client interface, but which can fall back to older memcached servers
when reads to the primary server fail. The approach for upgrading memcached
servers or configuration then becomes:
1. Deploy a new host (or fleet) with memcached, possibly with a new
configuration.
2. From your application servers, use FallbackClient to write and read from
the new cluster, and to read from the old cluster when there is a miss in
the new cluster.
3. Wait until the new cache is warm enough to support the load.
4. Switch from FallbackClient to a regular Client library for doing all
reads and writes to the new cluster.
5. Take down the old cluster.
Best Practices:
---------------
- Make sure that the old client has "ignore_exc" set to True, so that it
treats failures like cache misses. That will allow you to take down the
old cluster before you switch away from FallbackClient.
"""
class FallbackClient:
def __init__(self, caches):
assert len(caches) > 0
self.caches = caches
def close(self):
"Close each of the memcached clients"
for cache in self.caches:
cache.close()
def set(self, key, value, expire=0, noreply=True):
self.caches[0].set(key, value, expire, noreply)
def add(self, key, value, expire=0, noreply=True):
self.caches[0].add(key, value, expire, noreply)
def replace(self, key, value, expire=0, noreply=True):
self.caches[0].replace(key, value, expire, noreply)
def append(self, key, value, expire=0, noreply=True):
self.caches[0].append(key, value, expire, noreply)
def prepend(self, key, value, expire=0, noreply=True):
self.caches[0].prepend(key, value, expire, noreply)
def cas(self, key, value, cas, expire=0, noreply=True):
self.caches[0].cas(key, value, cas, expire, noreply)
def get(self, key):
for cache in self.caches:
result = cache.get(key)
if result is not None:
return result
return None
def get_many(self, keys):
for cache in self.caches:
result = cache.get_many(keys)
if result:
return result
return []
def gets(self, key):
for cache in self.caches:
result = cache.gets(key)
if result is not None:
return result
return None
def gets_many(self, keys):
for cache in self.caches:
result = cache.gets_many(keys)
if result:
return result
return []
def delete(self, key, noreply=True):
self.caches[0].delete(key, noreply)
def incr(self, key, value, noreply=True):
self.caches[0].incr(key, value, noreply)
def decr(self, key, value, noreply=True):
self.caches[0].decr(key, value, noreply)
def touch(self, key, expire=0, noreply=True):
self.caches[0].touch(key, expire, noreply)
def stats(self):
# TODO: ??
pass
def flush_all(self, delay=0, noreply=True):
self.caches[0].flush_all(delay, noreply)
def quit(self):
# TODO: ??
pass
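The read-through fallback in ``get`` above can be demonstrated with dict-backed stand-ins for the two clusters (``DictCache`` and the key names are invented for the example):

```python
class DictCache:
    """Dict-backed stand-in for a memcached client."""
    def __init__(self, data=None):
        self.data = dict(data or {})

    def set(self, key, value):
        self.data[key] = value

    def get(self, key):
        return self.data.get(key)

new_cache = DictCache()                       # cold, newly deployed
old_cache = DictCache({"user:1": "alice"})    # warm, being retired

def fallback_get(caches, key):
    # Same loop as FallbackClient.get: first hit wins.
    for cache in caches:
        result = cache.get(key)
        if result is not None:
            return result
    return None

print(fallback_get([new_cache, old_cache], "user:1"))  # "alice"
new_cache.set("user:1", "alice")  # writes warm only the new cluster
```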

View File

@@ -0,0 +1,134 @@
# Copyright 2015 Yahoo.com
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
import collections
import contextlib
import threading
import time
from typing import Callable, Optional, TypeVar, Deque, List, Generic, Iterator
T = TypeVar("T")
class ObjectPool(Generic[T]):
"""A pool of objects that release/creates/destroys as needed."""
def __init__(
self,
obj_creator: Callable[[], T],
after_remove: Optional[Callable] = None,
max_size: Optional[int] = None,
idle_timeout: int = 0,
lock_generator: Optional[Callable] = None,
):
self._used_objs: Deque[T] = collections.deque()
self._free_objs: Deque[T] = collections.deque()
self._obj_creator = obj_creator
if lock_generator is None:
self._lock = threading.Lock()
else:
self._lock = lock_generator()
self._after_remove = after_remove
max_size = max_size or 2**31
if not isinstance(max_size, int) or max_size < 0:
raise ValueError('"max_size" must be a positive integer')
self.max_size = max_size
self.idle_timeout = idle_timeout
if idle_timeout:
self._idle_clock = time.time
else:
self._idle_clock = float
@property
def used(self):
return tuple(self._used_objs)
@property
def free(self):
return tuple(self._free_objs)
@contextlib.contextmanager
def get_and_release(self, destroy_on_fail=False) -> Iterator[T]:
obj = self.get()
try:
yield obj
except Exception:
if not destroy_on_fail:
self.release(obj)
else:
self.destroy(obj)
raise
self.release(obj)
def get(self):
with self._lock:
# Find a free object, removing any that have idled for too long.
now = self._idle_clock()
while self._free_objs:
obj = self._free_objs.popleft()
if now - obj._last_used <= self.idle_timeout:
break
if self._after_remove is not None:
self._after_remove(obj)
else:
# No free objects, create a new one.
curr_count = len(self._used_objs)
if curr_count >= self.max_size:
raise RuntimeError(
"Too many objects," " %s >= %s" % (curr_count, self.max_size)
)
obj = self._obj_creator()
self._used_objs.append(obj)
obj._last_used = now
return obj
def destroy(self, obj, silent=True) -> None:
was_dropped = False
with self._lock:
try:
self._used_objs.remove(obj)
was_dropped = True
except ValueError:
if not silent:
raise
if was_dropped and self._after_remove is not None:
self._after_remove(obj)
def release(self, obj, silent=True) -> None:
with self._lock:
try:
self._used_objs.remove(obj)
self._free_objs.append(obj)
obj._last_used = self._idle_clock()
except ValueError:
if not silent:
raise
def clear(self) -> None:
if self._after_remove is not None:
needs_destroy: List[T] = []
with self._lock:
needs_destroy.extend(self._used_objs)
needs_destroy.extend(self._free_objs)
self._free_objs.clear()
self._used_objs.clear()
for obj in needs_destroy:
self._after_remove(obj)
else:
with self._lock:
self._free_objs.clear()
self._used_objs.clear()
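The ``get_and_release`` context manager above encodes a release-on-success, destroy-on-failure policy. A minimal stand-in pool (``TinyPool`` is ours; it omits locking, sizing and idle timeouts) shows the effect:

```python
import contextlib

class TinyPool:
    def __init__(self, creator):
        self.creator = creator
        self.free = []
        self.used = []

    def get(self):
        obj = self.free.pop() if self.free else self.creator()
        self.used.append(obj)
        return obj

    def release(self, obj):
        self.used.remove(obj)
        self.free.append(obj)   # back to the free list for reuse

    def destroy(self, obj):
        self.used.remove(obj)   # dropped, never reused

    @contextlib.contextmanager
    def get_and_release(self, destroy_on_fail=False):
        obj = self.get()
        try:
            yield obj
        except Exception:
            if destroy_on_fail:
                self.destroy(obj)
            else:
                self.release(obj)
            raise
        self.release(obj)

pool = TinyPool(creator=lambda: object())
with pool.get_and_release() as conn:
    pass                        # normal use: conn returns to the pool
assert len(pool.free) == 1

try:
    with pool.get_and_release(destroy_on_fail=True) as conn:
        raise RuntimeError("broken connection")
except RuntimeError:
    pass
assert len(pool.free) == 0      # the failed object was destroyed
```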

View File

@@ -0,0 +1,193 @@
# Copyright 2012 Pinterest.com
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
import logging
import pickle
import zlib
from functools import partial
from io import BytesIO
FLAG_BYTES = 0
FLAG_PICKLE = 1 << 0
FLAG_INTEGER = 1 << 1
FLAG_LONG = 1 << 2
FLAG_COMPRESSED = 1 << 3
FLAG_TEXT = 1 << 4
# Pickle protocol version (highest available to runtime)
# Warning with `0`: If somewhere in your value lies a slotted object,
# i.e. defines `__slots__`, even if you do not include it in your pickleable
# state via `__getstate__`, python will complain with something like:
# TypeError: a class that defines __slots__ without defining __getstate__
# cannot be pickled
DEFAULT_PICKLE_VERSION = pickle.HIGHEST_PROTOCOL
def _python_memcache_serializer(key, value, pickle_version=None):
flags = 0
value_type = type(value)
# Check against exact types so that subclasses of native types will be
# restored as their native type
if value_type is bytes:
pass
elif value_type is str:
flags |= FLAG_TEXT
value = value.encode("utf8")
elif value_type is int:
flags |= FLAG_INTEGER
value = "%d" % value
else:
flags |= FLAG_PICKLE
output = BytesIO()
pickler = pickle.Pickler(output, pickle_version)
pickler.dump(value)
value = output.getvalue()
return value, flags
def get_python_memcache_serializer(pickle_version: int = DEFAULT_PICKLE_VERSION):
"""Return a serializer using a specific pickle version"""
return partial(_python_memcache_serializer, pickle_version=pickle_version)
python_memcache_serializer = get_python_memcache_serializer()
def python_memcache_deserializer(key, value, flags):
if flags == 0:
return value
elif flags & FLAG_TEXT:
return value.decode("utf8")
elif flags & FLAG_INTEGER:
return int(value)
elif flags & FLAG_LONG:
return int(value)
elif flags & FLAG_PICKLE:
try:
buf = BytesIO(value)
unpickler = pickle.Unpickler(buf)
return unpickler.load()
except Exception:
logging.info("Pickle error", exc_info=True)
return None
return value
class PickleSerde:
"""
An object which implements the serialization/deserialization protocol for
:py:class:`pymemcache.client.base.Client` and its descendants using the
:mod:`pickle` module.
Serialization and deserialization are implemented as methods of this class.
To implement a custom serialization/deserialization method for pymemcache,
you should implement the same interface as the one provided by this object
-- :py:meth:`pymemcache.serde.PickleSerde.serialize` and
:py:meth:`pymemcache.serde.PickleSerde.deserialize`. Then,
pass your custom object to the pymemcache client object in place of
`PickleSerde`.
For more details on the serialization protocol, see the class documentation
for :py:class:`pymemcache.client.base.Client`
"""
def __init__(self, pickle_version: int = DEFAULT_PICKLE_VERSION) -> None:
self._serialize_func = get_python_memcache_serializer(pickle_version)
def serialize(self, key, value):
return self._serialize_func(key, value)
def deserialize(self, key, value, flags):
return python_memcache_deserializer(key, value, flags)
pickle_serde = PickleSerde()
class CompressedSerde:
"""
An object which implements the serialization/deserialization protocol for
:py:class:`pymemcache.client.base.Client` and its descendants with
configurable compression.
"""
def __init__(
self,
compress=zlib.compress,
decompress=zlib.decompress,
serde=pickle_serde,
# Discovered via the `test_optimal_compression_length` test.
min_compress_len=400,
):
self._serde = serde
self._compress = compress
self._decompress = decompress
self._min_compress_len = min_compress_len
def serialize(self, key, value):
value, flags = self._serde.serialize(key, value)
if len(value) > self._min_compress_len > 0:
old_value = value
value = self._compress(value)
# Don't use the compressed value if our end result is actually
# larger uncompressed.
if len(old_value) < len(value):
value = old_value
else:
flags |= FLAG_COMPRESSED
return value, flags
def deserialize(self, key, value, flags):
if flags & FLAG_COMPRESSED:
value = self._decompress(value)
value = self._serde.deserialize(key, value, flags)
return value
compressed_serde = CompressedSerde()
class LegacyWrappingSerde:
"""
This class defines how to wrap legacy de/serialization functions into a
'serde' object which implements '.serialize' and '.deserialize' methods.
It is used automatically by pymemcache.client.base.Client when the
'serializer' or 'deserializer' arguments are given.
The serializer_func and deserializer_func are expected to be None in the
case that they are missing.
"""
def __init__(self, serializer_func, deserializer_func) -> None:
self.serialize = serializer_func or self._default_serialize
self.deserialize = deserializer_func or self._default_deserialize
def _default_serialize(self, key, value):
return value, 0
def _default_deserialize(self, key, value, flags):
return value
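The flag scheme above can be exercised with a stripped-down round trip. This mirrors the logic of ``_python_memcache_serializer`` / ``python_memcache_deserializer`` but is a simplified sketch: it uses ``isinstance`` rather than exact type checks, and omits ``FLAG_LONG`` and compression:

```python
import pickle
from io import BytesIO

FLAG_PICKLE, FLAG_INTEGER, FLAG_TEXT = 1 << 0, 1 << 1, 1 << 4

def serialize(value):
    if isinstance(value, bytes):
        return value, 0
    if isinstance(value, str):
        return value.encode("utf8"), FLAG_TEXT
    if isinstance(value, int):
        return b"%d" % value, FLAG_INTEGER
    return pickle.dumps(value), FLAG_PICKLE  # everything else is pickled

def deserialize(value, flags):
    if flags & FLAG_TEXT:
        return value.decode("utf8")
    if flags & FLAG_INTEGER:
        return int(value)
    if flags & FLAG_PICKLE:
        return pickle.load(BytesIO(value))
    return value

# Each value is stored as bytes plus flags, and comes back intact.
for original in (b"raw", "text", 42, {"nested": [1, 2]}):
    stored, flags = serialize(original)
    assert isinstance(stored, bytes)
    assert deserialize(stored, flags) == original
```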

View File

@@ -0,0 +1,116 @@
import os.path
import socket
import ssl
import pytest
def pytest_addoption(parser):
parser.addoption(
"--server", action="store", default="localhost", help="memcached server"
)
parser.addoption(
"--port", action="store", default="11211", help="memcached server port"
)
parser.addoption(
"--tls-server", action="store", default="localhost", help="TLS memcached server"
)
parser.addoption(
"--tls-port", action="store", default="11212", help="TLS memcached server port"
)
parser.addoption(
"--size", action="store", default=1024, help="size of data in benchmarks"
)
parser.addoption(
"--count",
action="store",
default=10000,
help="number of iterations to run each benchmark",
)
parser.addoption(
"--keys",
action="store",
default=20,
help="number of keys to use for multi benchmarks",
)
@pytest.fixture(scope="session")
def host(request):
return request.config.option.server
@pytest.fixture(scope="session")
def port(request):
return int(request.config.option.port)
@pytest.fixture(scope="session")
def tls_host(request):
return request.config.option.tls_server
@pytest.fixture(scope="session")
def tls_port(request):
return int(request.config.option.tls_port)
@pytest.fixture(scope="session")
def size(request):
return int(request.config.option.size)
@pytest.fixture(scope="session")
def count(request):
return int(request.config.option.count)
@pytest.fixture(scope="session")
def keys(request):
return int(request.config.option.keys)
@pytest.fixture(scope="session")
def pairs(size, keys):
return {"pymemcache_test:%d" % i: "X" * size for i in range(keys)}
@pytest.fixture(scope="session")
def tls_context():
return ssl.create_default_context(
cafile=os.path.join(os.path.dirname(__file__), "certs/ca-root.crt")
)
def pytest_generate_tests(metafunc):
if "socket_module" in metafunc.fixturenames:
socket_modules = [socket]
try:
from gevent import socket as gevent_socket # type: ignore
except ImportError:
print("Skipping gevent (not installed)")
else:
socket_modules.append(gevent_socket)
metafunc.parametrize("socket_module", socket_modules)
if "client_class" in metafunc.fixturenames:
from pymemcache.client.base import Client, PooledClient
from pymemcache.client.hash import HashClient
class HashClientSingle(HashClient):
def __init__(self, server, *args, **kwargs):
super().__init__([server], *args, **kwargs)
metafunc.parametrize("client_class", [Client, PooledClient, HashClientSingle])
if "key_prefix" in metafunc.fixturenames:
mark = metafunc.definition.get_closest_marker("parametrize")
if not mark or "key_prefix" not in mark.args[0]:
metafunc.parametrize("key_prefix", [b"", b"prefix"])


@@ -0,0 +1,118 @@
# Copyright 2012 Pinterest.com
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
import time
import pytest
try:
import pylibmc # type: ignore
HAS_PYLIBMC = True
except Exception:
HAS_PYLIBMC = False
try:
import memcache # type: ignore
HAS_MEMCACHE = True
except Exception:
HAS_MEMCACHE = False
try:
import pymemcache.client
HAS_PYMEMCACHE = True
except Exception:
HAS_PYMEMCACHE = False
@pytest.fixture(
params=[
"pylibmc",
"memcache",
"pymemcache",
]
)
def client(request, host, port):
if request.param == "pylibmc":
if not HAS_PYLIBMC:
pytest.skip("requires pylibmc")
client = pylibmc.Client([f"{host}:{port}"])
client.behaviors = {"tcp_nodelay": True}
elif request.param == "memcache":
if not HAS_MEMCACHE:
pytest.skip("requires python-memcached")
client = memcache.Client([f"{host}:{port}"])
elif request.param == "pymemcache":
if not HAS_PYMEMCACHE:
pytest.skip("requires pymemcache")
client = pymemcache.client.Client((host, port))
else:
pytest.skip(f"unknown library {request.param}")
client.flush_all()
return client
def benchmark(count, func, *args, **kwargs):
start = time.time()
for _ in range(count):
result = func(*args, **kwargs)
duration = time.time() - start
print(str(duration))
return result
@pytest.mark.benchmark()
def test_bench_get(request, client, pairs, count):
key = "pymemcache_test:0"
value = pairs[key]
client.set(key, value)
benchmark(count, client.get, key)
@pytest.mark.benchmark()
def test_bench_set(request, client, pairs, count):
key = "pymemcache_test:0"
value = pairs[key]
benchmark(count, client.set, key, value)
@pytest.mark.benchmark()
def test_bench_get_multi(request, client, pairs, count):
client.set_multi(pairs)
benchmark(count, client.get_multi, list(pairs))
@pytest.mark.benchmark()
def test_bench_set_multi(request, client, pairs, count):
benchmark(count, client.set_multi, pairs)
@pytest.mark.benchmark()
def test_bench_delete(request, client, pairs, count):
benchmark(count, client.delete, next(iter(pairs)))
@pytest.mark.benchmark()
def test_bench_delete_multi(request, client, pairs, count):
# deleting a missing key takes the same client-side work as a real key
benchmark(count, client.delete_multi, list(pairs.keys()))

File diff suppressed because it is too large


@@ -0,0 +1,519 @@
from pymemcache.client.hash import HashClient
from pymemcache.client.base import Client, PooledClient
from pymemcache.exceptions import MemcacheError, MemcacheUnknownError
from pymemcache import pool
from .test_client import ClientTestMixin, MockSocket
import unittest
import os
import pytest
from unittest import mock
import socket
class TestHashClient(ClientTestMixin, unittest.TestCase):
def make_client_pool(self, hostname, mock_socket_values, serializer=None, **kwargs):
mock_client = Client(hostname, serializer=serializer, **kwargs)
mock_client.sock = MockSocket(mock_socket_values)
client = PooledClient(hostname, serializer=serializer)
client.client_pool = pool.ObjectPool(lambda: mock_client)
return mock_client
def make_client(self, *mock_socket_values, **kwargs):
current_port = 11012
client = HashClient([], **kwargs)
ip = "127.0.0.1"
for vals in mock_socket_values:
s = f"{ip}:{current_port}"
c = self.make_client_pool((ip, current_port), vals, **kwargs)
client.clients[s] = c
client.hasher.add_node(s)
current_port += 1
return client
def make_unix_client(self, sockets, *mock_socket_values, **kwargs):
client = HashClient([], **kwargs)
for socket_, vals in zip(sockets, mock_socket_values):
c = self.make_client_pool(socket_, vals, **kwargs)
client.clients[socket_] = c
client.hasher.add_node(socket_)
return client
def test_setup_client_without_pooling(self):
client_class = "pymemcache.client.hash.HashClient.client_class"
with mock.patch(client_class) as internal_client:
client = HashClient([], timeout=999, key_prefix="foo_bar_baz")
client.add_server(("127.0.0.1", "11211"))
assert internal_client.call_args[0][0] == ("127.0.0.1", "11211")
kwargs = internal_client.call_args[1]
assert kwargs["timeout"] == 999
assert kwargs["key_prefix"] == "foo_bar_baz"
def test_get_many_unix(self):
pid = os.getpid()
sockets = [
"/tmp/pymemcache.1.%d" % pid,
"/tmp/pymemcache.2.%d" % pid,
]
client = self.make_unix_client(
sockets,
*[
[
b"STORED\r\n",
b"VALUE key3 0 6\r\nvalue2\r\nEND\r\n",
],
[
b"STORED\r\n",
b"VALUE key1 0 6\r\nvalue1\r\nEND\r\n",
],
],
)
def get_clients(key):
if key == b"key3":
return client.clients["/tmp/pymemcache.1.%d" % pid]
else:
return client.clients["/tmp/pymemcache.2.%d" % pid]
client._get_client = get_clients
result = client.set(b"key1", b"value1", noreply=False)
result = client.set(b"key3", b"value2", noreply=False)
result = client.get_many([b"key1", b"key3"])
assert result == {b"key1": b"value1", b"key3": b"value2"}
def test_get_many_all_found(self):
client = self.make_client(
*[
[
b"STORED\r\n",
b"VALUE key3 0 6\r\nvalue2\r\nEND\r\n",
],
[
b"STORED\r\n",
b"VALUE key1 0 6\r\nvalue1\r\nEND\r\n",
],
]
)
def get_clients(key):
if key == b"key3":
return client.clients["127.0.0.1:11012"]
else:
return client.clients["127.0.0.1:11013"]
client._get_client = get_clients
result = client.set(b"key1", b"value1", noreply=False)
result = client.set(b"key3", b"value2", noreply=False)
result = client.get_many([b"key1", b"key3"])
assert result == {b"key1": b"value1", b"key3": b"value2"}
def test_get_many_some_found(self):
client = self.make_client(
*[
[
b"END\r\n",
],
[
b"STORED\r\n",
b"VALUE key1 0 6\r\nvalue1\r\nEND\r\n",
],
]
)
def get_clients(key):
if key == b"key3":
return client.clients["127.0.0.1:11012"]
else:
return client.clients["127.0.0.1:11013"]
client._get_client = get_clients
result = client.set(b"key1", b"value1", noreply=False)
result = client.get_many([b"key1", b"key3"])
assert result == {b"key1": b"value1"}
def test_get_many_bad_server_data(self):
client = self.make_client(
*[
[
b"STORED\r\n",
b"VAXLUE key3 0 6\r\nvalue2\r\nEND\r\n",
],
[
b"STORED\r\n",
b"VAXLUE key1 0 6\r\nvalue1\r\nEND\r\n",
],
]
)
def get_clients(key):
if key == b"key3":
return client.clients["127.0.0.1:11012"]
else:
return client.clients["127.0.0.1:11013"]
client._get_client = get_clients
with pytest.raises(MemcacheUnknownError):
client.set(b"key1", b"value1", noreply=False)
client.set(b"key3", b"value2", noreply=False)
client.get_many([b"key1", b"key3"])
def test_get_many_bad_server_data_ignore(self):
client = self.make_client(
*[
[
b"STORED\r\n",
b"VAXLUE key3 0 6\r\nvalue2\r\nEND\r\n",
],
[
b"STORED\r\n",
b"VAXLUE key1 0 6\r\nvalue1\r\nEND\r\n",
],
],
ignore_exc=True,
)
def get_clients(key):
if key == b"key3":
return client.clients["127.0.0.1:11012"]
else:
return client.clients["127.0.0.1:11013"]
client._get_client = get_clients
client.set(b"key1", b"value1", noreply=False)
client.set(b"key3", b"value2", noreply=False)
result = client.get_many([b"key1", b"key3"])
assert result == {}
def test_gets_many(self):
client = self.make_client(
*[
[
b"STORED\r\n",
b"VALUE key3 0 6 1\r\nvalue2\r\nEND\r\n",
],
[
b"STORED\r\n",
b"VALUE key1 0 6 1\r\nvalue1\r\nEND\r\n",
],
]
)
def get_clients(key):
if key == b"key3":
return client.clients["127.0.0.1:11012"]
else:
return client.clients["127.0.0.1:11013"]
client._get_client = get_clients
assert client.set(b"key1", b"value1", noreply=False) is True
assert client.set(b"key3", b"value2", noreply=False) is True
result = client.gets_many([b"key1", b"key3"])
assert result == {b"key1": (b"value1", b"1"), b"key3": (b"value2", b"1")}
def test_touch_not_found(self):
client = self.make_client([b"NOT_FOUND\r\n"])
result = client.touch(b"key", noreply=False)
assert result is False
def test_touch_no_expiry_found(self):
client = self.make_client([b"TOUCHED\r\n"])
result = client.touch(b"key", noreply=False)
assert result is True
def test_touch_with_expiry_found(self):
client = self.make_client([b"TOUCHED\r\n"])
result = client.touch(b"key", 1, noreply=False)
assert result is True
def test_close(self):
client = self.make_client([])
assert all(c.sock is not None for c in client.clients.values())
result = client.close()
assert result is None
assert all(c.sock is None for c in client.clients.values())
def test_quit(self):
client = self.make_client([])
assert all(c.sock is not None for c in client.clients.values())
result = client.quit()
assert result is None
assert all(c.sock is None for c in client.clients.values())
def test_no_servers_left(self):
from pymemcache.client.hash import HashClient
client = HashClient(
[], use_pooling=True, ignore_exc=True, timeout=1, connect_timeout=1
)
hashed_client = client._get_client("foo")
assert hashed_client is None
def test_no_servers_left_raise_exception(self):
from pymemcache.client.hash import HashClient
client = HashClient(
[], use_pooling=True, ignore_exc=False, timeout=1, connect_timeout=1
)
with pytest.raises(MemcacheError) as e:
client._get_client("foo")
assert str(e.value) == "All servers seem to be down right now"
def test_unavailable_servers_zero_retry_raise_exception(self):
from pymemcache.client.hash import HashClient
client = HashClient(
[("example.com", 11211)],
use_pooling=True,
ignore_exc=False,
retry_attempts=0,
timeout=1,
connect_timeout=1,
)
with pytest.raises(socket.error):
client.get("foo")
def test_no_servers_left_with_commands_return_default_value(self):
from pymemcache.client.hash import HashClient
client = HashClient(
[], use_pooling=True, ignore_exc=True, timeout=1, connect_timeout=1
)
result = client.get("foo")
assert result is None
result = client.get("foo", default="default")
assert result == "default"
result = client.set("foo", "bar")
assert result is False
def test_no_servers_left_return_positional_default(self):
from pymemcache.client.hash import HashClient
client = HashClient(
[], use_pooling=True, ignore_exc=True, timeout=1, connect_timeout=1
)
# Ensure compatibility with clients that pass the default as a
# positional argument
result = client.get("foo", "default")
assert result == "default"
def test_no_servers_left_with_set_many(self):
from pymemcache.client.hash import HashClient
client = HashClient(
[], use_pooling=True, ignore_exc=True, timeout=1, connect_timeout=1
)
result = client.set_many({"foo": "bar"})
assert result == ["foo"]
def test_no_servers_left_with_get_many(self):
from pymemcache.client.hash import HashClient
client = HashClient(
[], use_pooling=True, ignore_exc=True, timeout=1, connect_timeout=1
)
result = client.get_many(["foo", "bar"])
assert result == {}
def test_ignore_exec_set_many(self):
values = {"key1": "value1", "key2": "value2", "key3": "value3"}
with pytest.raises(MemcacheUnknownError):
client = self.make_client(
*[
[b"STORED\r\n", b"UNKNOWN\r\n", b"STORED\r\n"],
[b"STORED\r\n", b"UNKNOWN\r\n", b"STORED\r\n"],
]
)
client.set_many(values, noreply=False)
client = self.make_client(
*[
[b"STORED\r\n", b"UNKNOWN\r\n", b"STORED\r\n"],
],
ignore_exc=True,
)
result = client.set_many(values, noreply=False)
assert len(result) == 0
def test_noreply_set_many(self):
values = {"key1": "value1", "key2": "value2", "key3": "value3"}
client = self.make_client(
*[
[b"STORED\r\n", b"NOT_STORED\r\n", b"STORED\r\n"],
]
)
result = client.set_many(values, noreply=False)
assert len(result) == 1
client = self.make_client(
*[
[b"STORED\r\n", b"NOT_STORED\r\n", b"STORED\r\n"],
]
)
result = client.set_many(values, noreply=True)
assert result == []
def test_noreply_flush(self):
client = self.make_client()
client.flush_all(noreply=True)
def test_set_many_unix(self):
values = {"key1": "value1", "key2": "value2", "key3": "value3"}
pid = os.getpid()
sockets = ["/tmp/pymemcache.%d" % pid]
client = self.make_unix_client(
sockets,
*[
[b"STORED\r\n", b"NOT_STORED\r\n", b"STORED\r\n"],
],
)
result = client.set_many(values, noreply=False)
assert len(result) == 1
def test_server_encoding_pooled(self):
"""
Test that an encoding passed to HashClient propagates to its pooled clients.
"""
encoding = "utf8"
from pymemcache.client.hash import HashClient
hash_client = HashClient(
[("example.com", 11211)], use_pooling=True, encoding=encoding
)
for client in hash_client.clients.values():
assert client.encoding == encoding
def test_server_encoding_client(self):
"""
Test that an encoding passed to HashClient propagates to its clients.
"""
encoding = "utf8"
from pymemcache.client.hash import HashClient
hash_client = HashClient([("example.com", 11211)], encoding=encoding)
for client in hash_client.clients.values():
assert client.encoding == encoding
@mock.patch("pymemcache.client.hash.HashClient.client_class")
def test_dead_server_comes_back(self, client_patch):
client = HashClient([], dead_timeout=0, retry_attempts=0)
client.add_server(("127.0.0.1", 11211))
test_client = client_patch.return_value
test_client.server = ("127.0.0.1", 11211)
test_client.get.side_effect = socket.timeout()
with pytest.raises(socket.timeout):
client.get(b"key", noreply=False)
# Client gets removed because of socket timeout
assert ("127.0.0.1", 11211) in client._dead_clients
test_client.get.side_effect = lambda *_, **_kw: "Some value"
# Client should be retried and brought back
assert client.get(b"key") == "Some value"
assert ("127.0.0.1", 11211) not in client._dead_clients
@mock.patch("pymemcache.client.hash.HashClient.client_class")
def test_failed_is_retried(self, client_patch):
client = HashClient([], retry_attempts=1, retry_timeout=0)
client.add_server(("127.0.0.1", 11211))
assert client_patch.call_count == 1
test_client = client_patch.return_value
test_client.server = ("127.0.0.1", 11211)
test_client.get.side_effect = socket.timeout()
with pytest.raises(socket.timeout):
client.get(b"key", noreply=False)
test_client.get.side_effect = lambda *_, **_kw: "Some value"
assert client.get(b"key") == "Some value"
assert client_patch.call_count == 1
def test_custom_client(self):
class MyClient(Client):
pass
client = HashClient([])
client.client_class = MyClient
client.add_server(("host", 11211))
assert isinstance(client.clients["host:11211"], MyClient)
def test_custom_client_with_pooling(self):
class MyClient(Client):
pass
client = HashClient([], use_pooling=True)
client.client_class = MyClient
client.add_server(("host", 11211))
assert isinstance(client.clients["host:11211"], PooledClient)
pool = client.clients["host:11211"].client_pool
with pool.get_and_release(destroy_on_fail=True) as c:
assert isinstance(c, MyClient)
def test_mixed_inet_and_unix_sockets(self):
expected = {
f"/tmp/pymemcache.{os.getpid()}",
("127.0.0.1", 11211),
("::1", 11211),
}
client = HashClient(
[
f"/tmp/pymemcache.{os.getpid()}",
"127.0.0.1",
"127.0.0.1:11211",
"[::1]",
"[::1]:11211",
("127.0.0.1", 11211),
("::1", 11211),
]
)
assert expected == {c.server for c in client.clients.values()}
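The mixed-format test above relies on the client normalising server addresses: unix paths stay strings, bare hosts get the default port, and bracketed IPv6 literals are unwrapped. A rough standalone sketch of that parsing (a hypothetical helper written for illustration, not the library's actual code):

```python
# Hypothetical address-normalisation sketch matching the expectations
# in the test above; the real parsing lives inside HashClient.
def parse_server(s, default_port=11211):
    if s.startswith("/"):          # unix domain socket path
        return s
    if s.startswith("["):          # bracketed IPv6 literal
        host, _, rest = s[1:].partition("]")
        port = int(rest[1:]) if rest.startswith(":") else default_port
        return (host, port)
    host, sep, port = s.rpartition(":")
    if sep and port.isdigit():     # "host:port"
        return (host, int(port))
    return (s, default_port)       # bare hostname

servers = [parse_server(s) for s in ["127.0.0.1", "[::1]:11211", "/tmp/pymemcache.sock"]]
```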
def test_legacy_add_remove_server_signature(self):
server = ("127.0.0.1", 11211)
client = HashClient([])
assert client.clients == {}
client.add_server(*server) # Unpack (host, port) tuple.
assert ("%s:%s" % server) in client.clients
client._mark_failed_server(server)
assert server in client._failed_clients
client.remove_server(*server) # Unpack (host, port) tuple.
assert server in client._dead_clients
assert server not in client._failed_clients
# Ensure that server is a string if passing port argument:
with pytest.raises(TypeError):
client.add_server(server, server[-1])
with pytest.raises(TypeError):
client.remove_server(server, server[-1])
# TODO: Test failover logic


@@ -0,0 +1,286 @@
""" Test collection for the RetryingClient. """
import functools
import unittest
from unittest import mock
import pytest
from .test_client import ClientTestMixin, MockSocket
from pymemcache.client.retrying import RetryingClient
from pymemcache.client.base import Client
from pymemcache.exceptions import MemcacheUnknownError, MemcacheClientError
# Test pure passthroughs with no retry action.
class TestRetryingClientPassthrough(ClientTestMixin, unittest.TestCase):
def make_base_client(self, mock_socket_values, **kwargs):
base_client = Client("localhost", **kwargs)
# mock out client._connect() rather than hard-setting client.sock to
# ensure methods are checking whether self.sock is None before
# attempting to use it
sock = MockSocket(list(mock_socket_values))
base_client._connect = mock.Mock(
side_effect=functools.partial(setattr, base_client, "sock", sock)
)
return base_client
def make_client(self, mock_socket_values, **kwargs):
# Create a base client to wrap.
base_client = self.make_base_client(
mock_socket_values=mock_socket_values, **kwargs
)
# Wrap the client in the retrying class, disable retries.
client = RetryingClient(base_client, attempts=1)
return client
# Retry specific tests.
@pytest.mark.unit()
class TestRetryingClient(object):
def make_base_client(self, mock_socket_values, **kwargs):
"""Creates a regular mock client to wrap in the RetryClient."""
base_client = Client("localhost", **kwargs)
# mock out client._connect() rather than hard-setting client.sock to
# ensure methods are checking whether self.sock is None before
# attempting to use it
sock = MockSocket(list(mock_socket_values))
base_client._connect = mock.Mock(
side_effect=functools.partial(setattr, base_client, "sock", sock)
)
return base_client
def make_client(self, mock_socket_values, **kwargs):
"""
Creates a RetryingClient that will respond with the given values,
configured using kwargs.
"""
# Create a base client to wrap.
base_client = self.make_base_client(mock_socket_values=mock_socket_values)
# Wrap the client in the retrying class, and pass kwargs on.
client = RetryingClient(base_client, **kwargs)
return client
# Start testing.
def test_constructor_default(self):
base_client = self.make_base_client([])
RetryingClient(base_client)
with pytest.raises(TypeError):
RetryingClient()
def test_constructor_attempts(self):
base_client = self.make_base_client([])
rc = RetryingClient(base_client, attempts=1)
assert rc._attempts == 1
with pytest.raises(ValueError):
RetryingClient(base_client, attempts=0)
def test_constructor_retry_for(self):
base_client = self.make_base_client([])
# Try none/default.
rc = RetryingClient(base_client, retry_for=None)
assert rc._retry_for == tuple()
# Try with tuple.
rc = RetryingClient(base_client, retry_for=tuple([Exception]))
assert rc._retry_for == tuple([Exception])
# Try with list.
rc = RetryingClient(base_client, retry_for=[Exception])
assert rc._retry_for == tuple([Exception])
# Try with multi element list.
rc = RetryingClient(base_client, retry_for=[Exception, IOError])
assert rc._retry_for == (Exception, IOError)
# With string?
with pytest.raises(ValueError):
RetryingClient(base_client, retry_for="haha!")
# With collection of string and exceptions?
with pytest.raises(ValueError):
RetryingClient(base_client, retry_for=[Exception, str])
def test_constructor_do_no_retry_for(self):
base_client = self.make_base_client([])
# Try none/default.
rc = RetryingClient(base_client, do_not_retry_for=None)
assert rc._do_not_retry_for == tuple()
# Try with tuple.
rc = RetryingClient(base_client, do_not_retry_for=tuple([Exception]))
assert rc._do_not_retry_for == tuple([Exception])
# Try with list.
rc = RetryingClient(base_client, do_not_retry_for=[Exception])
assert rc._do_not_retry_for == tuple([Exception])
# Try with multi element list.
rc = RetryingClient(base_client, do_not_retry_for=[Exception, IOError])
assert rc._do_not_retry_for == (Exception, IOError)
# With string?
with pytest.raises(ValueError):
RetryingClient(base_client, do_not_retry_for="haha!")
# With collection of string and exceptions?
with pytest.raises(ValueError):
RetryingClient(base_client, do_not_retry_for=[Exception, str])
def test_constructor_both_filters(self):
base_client = self.make_base_client([])
# Try none/default.
rc = RetryingClient(base_client, retry_for=None, do_not_retry_for=None)
assert rc._retry_for == tuple()
assert rc._do_not_retry_for == tuple()
# Try a valid config.
rc = RetryingClient(
base_client,
retry_for=[Exception, IOError],
do_not_retry_for=[ValueError, MemcacheUnknownError],
)
assert rc._retry_for == (Exception, IOError)
assert rc._do_not_retry_for == (ValueError, MemcacheUnknownError)
# Try with overlapping filters
with pytest.raises(ValueError):
rc = RetryingClient(
base_client,
retry_for=[Exception, IOError, MemcacheUnknownError],
do_not_retry_for=[ValueError, MemcacheUnknownError],
)
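The filter interaction exercised above can be sketched as a small decision function. This is a hedged standalone sketch, not the library's implementation; it assumes `do_not_retry_for` takes precedence, a case the overlap check above prevents from ever arising in practice:

```python
# Standalone sketch of the retry-filter semantics (illustrative assumption):
# - do_not_retry_for always suppresses a retry,
# - a non-empty retry_for acts as an allow-list,
# - with no filters, every exception is retried.
def should_retry(exc, retry_for=(), do_not_retry_for=()):
    if do_not_retry_for and isinstance(exc, tuple(do_not_retry_for)):
        return False
    if retry_for:
        return isinstance(exc, tuple(retry_for))
    return True
```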
def test_dir_passthrough(self):
base = self.make_base_client([])
client = RetryingClient(base)
assert dir(base) == dir(client)
def test_retry_dict_set_is_supported(self):
client = self.make_client([b"UNKNOWN\r\n", b"STORED\r\n"])
client[b"key"] = b"value"
def test_retry_dict_get_is_supported(self):
client = self.make_client(
[b"UNKNOWN\r\n", b"VALUE key 0 5\r\nvalue\r\nEND\r\n"]
)
assert client[b"key"] == b"value"
def test_retry_dict_get_not_found_is_supported(self):
client = self.make_client([b"UNKNOWN\r\n", b"END\r\n"])
with pytest.raises(KeyError):
client[b"key"]
def test_retry_dict_del_is_supported(self):
client = self.make_client([b"UNKNOWN\r\n", b"DELETED\r\n"])
del client[b"key"]
def test_retry_get_found(self):
client = self.make_client(
[b"UNKNOWN\r\n", b"VALUE key 0 5\r\nvalue\r\nEND\r\n"], attempts=2
)
result = client.get("key")
assert result == b"value"
def test_retry_get_not_found(self):
client = self.make_client([b"UNKNOWN\r\n", b"END\r\n"], attempts=2)
result = client.get("key")
assert result is None
def test_retry_get_exception(self):
client = self.make_client([b"UNKNOWN\r\n", b"UNKNOWN\r\n"], attempts=2)
with pytest.raises(MemcacheUnknownError):
client.get("key")
def test_retry_set_success(self):
client = self.make_client([b"UNKNOWN\r\n", b"STORED\r\n"], attempts=2)
result = client.set("key", "value", noreply=False)
assert result is True
def test_retry_set_fail(self):
client = self.make_client(
[b"UNKNOWN\r\n", b"UNKNOWN\r\n", b"STORED\r\n"], attempts=2
)
with pytest.raises(MemcacheUnknownError):
client.set("key", "value", noreply=False)
def test_no_retry(self):
client = self.make_client(
[b"UNKNOWN\r\n", b"VALUE key 0 5\r\nvalue\r\nEND\r\n"], attempts=1
)
with pytest.raises(MemcacheUnknownError):
client.get("key")
def test_retry_for_exception_success(self):
# Test that we retry for the exception specified.
client = self.make_client(
[MemcacheClientError("Whoops."), b"VALUE key 0 5\r\nvalue\r\nEND\r\n"],
attempts=2,
retry_for=tuple([MemcacheClientError]),
)
result = client.get("key")
assert result == b"value"
def test_retry_for_exception_fail(self):
# Test that we do not retry for unapproved exception.
client = self.make_client(
[MemcacheUnknownError("Whoops."), b"VALUE key 0 5\r\nvalue\r\nEND\r\n"],
attempts=2,
retry_for=tuple([MemcacheClientError]),
)
with pytest.raises(MemcacheUnknownError):
client.get("key")
def test_do_not_retry_for_exception_success(self):
# Test that we retry for exceptions not specified.
client = self.make_client(
[MemcacheClientError("Whoops."), b"VALUE key 0 5\r\nvalue\r\nEND\r\n"],
attempts=2,
do_not_retry_for=tuple([MemcacheUnknownError]),
)
result = client.get("key")
assert result == b"value"
def test_do_not_retry_for_exception_fail(self):
# Test that we do not retry for the exception specified.
client = self.make_client(
[MemcacheClientError("Whoops."), b"VALUE key 0 5\r\nvalue\r\nEND\r\n"],
attempts=2,
do_not_retry_for=tuple([MemcacheClientError]),
)
with pytest.raises(MemcacheClientError):
client.get("key")
def test_both_exception_filters(self):
# Test interaction between both exception filters.
client = self.make_client(
[
MemcacheClientError("Whoops."),
b"VALUE key 0 5\r\nvalue\r\nEND\r\n",
MemcacheUnknownError("Whoops."),
b"VALUE key 0 5\r\nvalue\r\nEND\r\n",
],
attempts=2,
retry_for=tuple([MemcacheClientError]),
do_not_retry_for=tuple([MemcacheUnknownError]),
)
# Check that we succeed where allowed.
result = client.get("key")
assert result == b"value"
# Check that no retries are attempted for the banned exception.
with pytest.raises(MemcacheUnknownError):
client.get("key")


@@ -0,0 +1,220 @@
from pymemcache.client.base import Client
from pymemcache.serde import (
CompressedSerde,
pickle_serde,
)
from faker import Faker
import pytest
import random
import string
import time
import zstd # type: ignore
import zlib
fake = Faker(["it_IT", "en_US", "ja_JP"])
def get_random_string(length):
letters = string.ascii_letters
chars = string.punctuation
digits = string.digits
total = letters + chars + digits
result_str = "".join(random.choice(total) for i in range(length))
return result_str
class CustomObject:
"""
Custom class for verifying serialization
"""
def __init__(self):
self.number = random.randint(0, 100)
self.string = fake.text()
self.object = fake.profile()
class CustomObjectValue:
def __init__(self, value):
self.value = value
def benchmark(count, func, *args, **kwargs):
start = time.time()
for _ in range(count):
result = func(*args, **kwargs)
duration = time.time() - start
print(str(duration))
return result
@pytest.fixture(scope="session")
def names():
names = []
for _ in range(15):
names.append(fake.name())
return names
@pytest.fixture(scope="session")
def paragraphs():
paragraphs = []
for _ in range(15):
paragraphs.append(fake.text())
return paragraphs
@pytest.fixture(scope="session")
def objects():
objects = []
for _ in range(15):
objects.append(CustomObject())
return objects
# Always run compression for the benchmarks
min_compress_len = 1
default_serde = CompressedSerde(min_compress_len=min_compress_len)
zlib_serde = CompressedSerde(
compress=lambda value: zlib.compress(value, 9),
decompress=lambda value: zlib.decompress(value),
min_compress_len=min_compress_len,
)
zstd_serde = CompressedSerde(
compress=lambda value: zstd.compress(value),
decompress=lambda value: zstd.decompress(value),
min_compress_len=min_compress_len,
)
serializers = [
None,
default_serde,
zlib_serde,
zstd_serde,
]
ids = ["none", "zlib ", "zlib9", "zstd "]
@pytest.mark.benchmark()
@pytest.mark.parametrize("serde", serializers, ids=ids)
def test_bench_compress_set_strings(count, host, port, serde, names):
client = Client((host, port), serde=serde, encoding="utf-8")
def test():
for index, name in enumerate(names):
key = f"name_{index}"
client.set(key, name)
benchmark(count, test)
@pytest.mark.benchmark()
@pytest.mark.parametrize("serde", serializers, ids=ids)
def test_bench_compress_get_strings(count, host, port, serde, names):
client = Client((host, port), serde=serde, encoding="utf-8")
for index, name in enumerate(names):
key = f"name_{index}"
client.set(key, name)
def test():
for index, _ in enumerate(names):
key = f"name_{index}"
client.get(key)
benchmark(count, test)
@pytest.mark.benchmark()
@pytest.mark.parametrize("serde", serializers, ids=ids)
def test_bench_compress_set_large_strings(count, host, port, serde, paragraphs):
client = Client((host, port), serde=serde, encoding="utf-8")
def test():
for index, p in enumerate(paragraphs):
key = f"paragraph_{index}"
client.set(key, p)
benchmark(count, test)
@pytest.mark.benchmark()
@pytest.mark.parametrize("serde", serializers, ids=ids)
def test_bench_compress_get_large_strings(count, host, port, serde, paragraphs):
client = Client((host, port), serde=serde, encoding="utf-8")
for index, p in enumerate(paragraphs):
key = f"paragraphs_{index}"
client.set(key, p)
def test():
for index, _ in enumerate(paragraphs):
key = f"paragraphs_{index}"
client.get(key)
benchmark(count, test)
@pytest.mark.benchmark()
@pytest.mark.parametrize("serde", serializers, ids=ids)
def test_bench_compress_set_objects(count, host, port, serde, objects):
client = Client((host, port), serde=serde, encoding="utf-8")
def test():
for index, o in enumerate(objects):
key = f"objects_{index}"
client.set(key, o)
benchmark(count, test)
@pytest.mark.benchmark()
@pytest.mark.parametrize("serde", serializers, ids=ids)
def test_bench_compress_get_objects(count, host, port, serde, objects):
client = Client((host, port), serde=serde, encoding="utf-8")
for index, o in enumerate(objects):
key = f"objects_{index}"
client.set(key, o)
def test():
for index, _ in enumerate(objects):
key = f"objects_{index}"
client.get(key)
benchmark(count, test)
@pytest.mark.benchmark()
def test_optimal_compression_length():
for length in range(5, 2000):
input_data = get_random_string(length)
start = len(input_data)
for index, serializer in enumerate(serializers[1:]):
name = ids[index + 1]
value, _ = serializer.serialize("foo", input_data)
end = len(value)
print(f"serializer={name}\t start={start}\t end={end}")
@pytest.mark.benchmark()
def test_optimal_compression_length_objects():
for length in range(5, 2000):
input_data = get_random_string(length)
obj = CustomObjectValue(input_data)
start = len(pickle_serde.serialize("foo", obj)[0])
for index, serializer in enumerate(serializers[1:]):
name = ids[index + 1]
value, _ = serializer.serialize("foo", obj)
end = len(value)
print(f"serializer={name}\t start={start}\t end={end}")


@@ -0,0 +1,441 @@
# Copyright 2012 Pinterest.com
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
import json
from collections import defaultdict
import pytest
from pymemcache.client.base import Client
from pymemcache.exceptions import (
MemcacheClientError,
MemcacheIllegalInputError,
MemcacheServerError,
)
from pymemcache.serde import PickleSerde, compressed_serde, pickle_serde
def get_set_helper(client, key, value, key2, value2):
result = client.get(key)
assert result is None
client.set(key, value, noreply=False)
result = client.get(key)
assert result == value
client.set(key2, value2, noreply=True)
result = client.get(key2)
assert result == value2
result = client.get_many([key, key2])
assert result == {key: value, key2: value2}
result = client.get_many([])
assert result == {}
@pytest.mark.integration()
@pytest.mark.parametrize(
"serde",
[
pickle_serde,
compressed_serde,
],
)
def test_get_set(client_class, host, port, serde, socket_module, key_prefix):
client = client_class(
(host, port), serde=serde, socket_module=socket_module, key_prefix=key_prefix
)
client.flush_all()
key = b"key"
value = b"value"
key2 = b"key2"
value2 = b"value2"
get_set_helper(client, key, value, key2, value2)
@pytest.mark.integration()
@pytest.mark.parametrize(
"serde",
[
pickle_serde,
compressed_serde,
],
)
def test_get_set_unicode_key(
client_class, host, port, serde, socket_module, key_prefix
):
client = client_class(
(host, port),
serde=serde,
socket_module=socket_module,
allow_unicode_keys=True,
key_prefix=key_prefix,
)
client.flush_all()
key = "こんにちは"
value = b"hello"
key2 = "my☃"
value2 = b"value2"
get_set_helper(client, key, value, key2, value2)
@pytest.mark.integration()
@pytest.mark.parametrize(
"serde",
[
pickle_serde,
compressed_serde,
],
)
def test_add_replace(client_class, host, port, serde, socket_module, key_prefix):
client = client_class(
(host, port), serde=serde, socket_module=socket_module, key_prefix=key_prefix
)
client.flush_all()
result = client.add(b"key", b"value", noreply=False)
assert result is True
result = client.get(b"key")
assert result == b"value"
result = client.add(b"key", b"value2", noreply=False)
assert result is False
result = client.get(b"key")
assert result == b"value"
result = client.replace(b"key1", b"value1", noreply=False)
assert result is False
result = client.get(b"key1")
assert result is None
result = client.replace(b"key", b"value2", noreply=False)
assert result is True
result = client.get(b"key")
assert result == b"value2"
@pytest.mark.integration()
def test_append_prepend(client_class, host, port, socket_module, key_prefix):
client = client_class(
(host, port), socket_module=socket_module, key_prefix=key_prefix
)
client.flush_all()
result = client.append(b"key", b"value", noreply=False)
assert result is False
result = client.get(b"key")
assert result is None
result = client.set(b"key", b"value", noreply=False)
assert result is True
result = client.append(b"key", b"after", noreply=False)
assert result is True
result = client.get(b"key")
assert result == b"valueafter"
result = client.prepend(b"key1", b"value", noreply=False)
assert result is False
result = client.get(b"key1")
assert result is None
result = client.prepend(b"key", b"before", noreply=False)
assert result is True
result = client.get(b"key")
assert result == b"beforevalueafter"
@pytest.mark.integration()
def test_cas(client_class, host, port, socket_module, key_prefix):
client = client_class(
(host, port), socket_module=socket_module, key_prefix=key_prefix
)
client.flush_all()
result = client.cas(b"key", b"value", b"1", noreply=False)
assert result is None
result = client.set(b"key", b"value", noreply=False)
assert result is True
# binary, string, and raw int all match -- should all be encoded as b'1'
result = client.cas(b"key", b"value", b"1", noreply=False)
assert result is False
result = client.cas(b"key", b"value", "1", noreply=False)
assert result is False
result = client.cas(b"key", b"value", 1, noreply=False)
assert result is False
result, cas = client.gets(b"key")
assert result == b"value"
result = client.cas(b"key", b"value1", cas, noreply=False)
assert result is True
result = client.cas(b"key", b"value2", cas, noreply=False)
assert result is False
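The `gets`/`cas` round-trip exercised above depends on the server handing out a fresh token on every write. A stdlib-only toy store (all names here are invented for illustration, not pymemcache API) reproduces those semantics — `None` on a miss, `False` on a stale token, `True` on a matching one:

```python
import itertools

class ToyCasStore:
    """Toy in-memory store mimicking memcached's gets/cas semantics."""

    def __init__(self):
        self._data = {}                 # key -> value
        self._tokens = {}               # key -> current cas token
        self._casid = itertools.count(1)

    def set(self, key, value):
        # Every successful write gets a brand-new token.
        self._data[key] = value
        self._tokens[key] = next(self._casid)

    def gets(self, key):
        return self._data.get(key), self._tokens.get(key)

    def cas(self, key, value, token):
        if key not in self._data:
            return None                 # miss, as in the test above
        if self._tokens[key] != token:
            return False                # someone else wrote in between
        self.set(key, value)
        return True

store = ToyCasStore()
store.set(b"key", b"value")
value, token = store.gets(b"key")
assert store.cas(b"key", b"value1", token) is True   # token still fresh
assert store.cas(b"key", b"value2", token) is False  # token now stale
```

In real code this pattern is typically wrapped in a read-modify-write retry loop.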
@pytest.mark.integration()
def test_gets(client_class, host, port, socket_module, key_prefix):
client = client_class(
(host, port), socket_module=socket_module, key_prefix=key_prefix
)
client.flush_all()
result = client.gets(b"key")
assert result == (None, None)
result = client.set(b"key", b"value", noreply=False)
assert result is True
result = client.gets(b"key")
assert result[0] == b"value"
@pytest.mark.integration()
def test_delete(client_class, host, port, socket_module, key_prefix):
client = client_class(
(host, port), socket_module=socket_module, key_prefix=key_prefix
)
client.flush_all()
result = client.delete(b"key", noreply=False)
assert result is False
result = client.get(b"key")
assert result is None
result = client.set(b"key", b"value", noreply=False)
assert result is True
result = client.delete(b"key", noreply=False)
assert result is True
result = client.get(b"key")
assert result is None
@pytest.mark.integration()
def test_incr_decr(client_class, host, port, socket_module, key_prefix):
client = Client((host, port), socket_module=socket_module, key_prefix=key_prefix)
client.flush_all()
result = client.incr(b"key", 1, noreply=False)
assert result is None
result = client.set(b"key", b"0", noreply=False)
assert result is True
result = client.incr(b"key", 1, noreply=False)
assert result == 1
def _bad_int():
client.incr(b"key", b"foobar")
with pytest.raises(MemcacheClientError):
_bad_int()
result = client.decr(b"key1", 1, noreply=False)
assert result is None
result = client.decr(b"key", 1, noreply=False)
assert result == 0
result = client.get(b"key")
assert result == b"0"
@pytest.mark.integration()
def test_touch(client_class, host, port, socket_module, key_prefix):
client = client_class(
(host, port), socket_module=socket_module, key_prefix=key_prefix
)
client.flush_all()
result = client.touch(b"key", noreply=False)
assert result is False
result = client.set(b"key", b"0", 1, noreply=False)
assert result is True
result = client.touch(b"key", noreply=False)
assert result is True
result = client.touch(b"key", 1, noreply=False)
assert result is True
@pytest.mark.integration()
def test_misc(client_class, host, port, socket_module, key_prefix):
client = Client((host, port), socket_module=socket_module, key_prefix=key_prefix)
client.flush_all()
# Ensure no exceptions are thrown
client.stats("cachedump", "1", "1")
success = client.cache_memlimit(50)
assert success
@pytest.mark.integration()
def test_serialization_deserialization(host, port, socket_module):
class JsonSerde:
def serialize(self, key, value):
return json.dumps(value).encode("ascii"), 1
def deserialize(self, key, value, flags):
if flags == 1:
return json.loads(value.decode("ascii"))
return value
client = Client((host, port), serde=JsonSerde(), socket_module=socket_module)
client.flush_all()
value = {"a": "b", "c": ["d"]}
client.set(b"key", value)
result = client.get(b"key")
assert result == value
def serde_serialization_helper(client_class, host, port, socket_module, serde):
def check(value):
client.set(b"key", value, noreply=False)
result = client.get(b"key")
assert result == value
assert type(result) is type(value)
client = client_class((host, port), serde=serde, socket_module=socket_module)
client.flush_all()
check(b"byte string")
check("unicode string")
check("olé")
check("olé")
check(1)
check(123123123123123123123)
check({"a": "pickle"})
check(["one pickle", "two pickle"])
testdict = defaultdict(int)
testdict["one pickle"]
testdict[b"two pickle"]
check(testdict)
@pytest.mark.integration()
@pytest.mark.parametrize(
"serde",
[
pickle_serde,
compressed_serde,
],
)
def test_serde_serialization(client_class, host, port, socket_module, serde):
serde_serialization_helper(client_class, host, port, socket_module, serde)
@pytest.mark.integration()
def test_serde_serialization0(client_class, host, port, socket_module):
serde_serialization_helper(
client_class, host, port, socket_module, PickleSerde(pickle_version=0)
)
@pytest.mark.integration()
def test_serde_serialization2(client_class, host, port, socket_module):
serde_serialization_helper(
client_class, host, port, socket_module, PickleSerde(pickle_version=2)
)
@pytest.mark.integration()
def test_errors(client_class, host, port, socket_module):
client = client_class((host, port), socket_module=socket_module)
client.flush_all()
def _key_with_ws():
client.set(b"key with spaces", b"value", noreply=False)
with pytest.raises(MemcacheIllegalInputError):
_key_with_ws()
def _key_with_illegal_carriage_return():
client.set(b"\r\nflush_all", b"value", noreply=False)
with pytest.raises(MemcacheIllegalInputError):
_key_with_illegal_carriage_return()
def _key_too_long():
client.set(b"x" * 1024, b"value", noreply=False)
with pytest.raises(MemcacheClientError):
_key_too_long()
def _unicode_key_in_set():
client.set("\u0FFF", b"value", noreply=False)
with pytest.raises(MemcacheClientError):
_unicode_key_in_set()
def _unicode_key_in_get():
client.get("\u0FFF")
with pytest.raises(MemcacheClientError):
_unicode_key_in_get()
def _unicode_value_in_set():
client.set(b"key", "\u0FFF", noreply=False)
with pytest.raises(MemcacheClientError):
_unicode_value_in_set()
@pytest.mark.skip("https://github.com/pinterest/pymemcache/issues/39")
@pytest.mark.integration()
def test_tls(client_class, tls_host, tls_port, socket_module, tls_context):
client = client_class(
(tls_host, tls_port), socket_module=socket_module, tls_context=tls_context
)
client.flush_all()
key = b"key"
value = b"value"
key2 = b"key2"
value2 = b"value2"
get_set_helper(client, key, value, key2, value2)
@pytest.mark.integration()
@pytest.mark.parametrize(
"serde,should_fail",
[
(pickle_serde, True),
(compressed_serde, False),
],
)
def test_get_set_large(
client_class,
host,
port,
serde,
socket_module,
should_fail,
):
client = client_class((host, port), serde=serde, socket_module=socket_module)
client.flush_all()
key = b"key"
value = b"value" * 1024 * 1024
key2 = b"key2"
value2 = b"value2" * 1024 * 1024
if should_fail:
with pytest.raises(MemcacheServerError):
get_set_helper(client, key, value, key2, value2)
else:
get_set_helper(client, key, value, key2, value2)


@@ -0,0 +1,203 @@
from pymemcache.client.rendezvous import RendezvousHash
import pytest
@pytest.mark.unit()
def test_init_no_options():
rendezvous = RendezvousHash()
assert 0 == len(rendezvous.nodes)
assert 1361238019 == rendezvous.hash_function("6666")
@pytest.mark.unit()
def test_init():
nodes = ["0", "1", "2"]
rendezvous = RendezvousHash(nodes=nodes)
assert 3 == len(rendezvous.nodes)
assert 1361238019 == rendezvous.hash_function("6666")
@pytest.mark.unit()
def test_seed():
rendezvous = RendezvousHash(seed=10)
assert 2981722772 == rendezvous.hash_function("6666")
@pytest.mark.unit()
def test_add_node():
rendezvous = RendezvousHash()
rendezvous.add_node("1")
assert 1 == len(rendezvous.nodes)
rendezvous.add_node("1")
assert 1 == len(rendezvous.nodes)
rendezvous.add_node("2")
assert 2 == len(rendezvous.nodes)
rendezvous.add_node("1")
assert 2 == len(rendezvous.nodes)
@pytest.mark.unit()
def test_remove_node():
nodes = ["0", "1", "2"]
rendezvous = RendezvousHash(nodes=nodes)
rendezvous.remove_node("2")
assert 2 == len(rendezvous.nodes)
with pytest.raises(ValueError):
rendezvous.remove_node("2")
assert 2 == len(rendezvous.nodes)
rendezvous.remove_node("1")
assert 1 == len(rendezvous.nodes)
rendezvous.remove_node("0")
assert 0 == len(rendezvous.nodes)
@pytest.mark.unit()
def test_get_node():
nodes = ["0", "1", "2"]
rendezvous = RendezvousHash(nodes=nodes)
assert "0" == rendezvous.get_node("ok")
assert "1" == rendezvous.get_node("mykey")
assert "2" == rendezvous.get_node("wat")
@pytest.mark.unit()
def test_get_node_after_removal():
nodes = ["0", "1", "2"]
rendezvous = RendezvousHash(nodes=nodes)
rendezvous.remove_node("1")
assert "0" == rendezvous.get_node("ok")
assert "0" == rendezvous.get_node("mykey")
assert "2" == rendezvous.get_node("wat")
@pytest.mark.unit()
def test_get_node_after_addition():
nodes = ["0", "1", "2"]
rendezvous = RendezvousHash(nodes=nodes)
assert "0" == rendezvous.get_node("ok")
assert "1" == rendezvous.get_node("mykey")
assert "2" == rendezvous.get_node("wat")
assert "2" == rendezvous.get_node("lol")
rendezvous.add_node("3")
assert "0" == rendezvous.get_node("ok")
assert "1" == rendezvous.get_node("mykey")
assert "2" == rendezvous.get_node("wat")
assert "3" == rendezvous.get_node("lol")
@pytest.mark.unit()
def test_grow():
rendezvous = RendezvousHash()
placements = {}
for i in range(10):
rendezvous.add_node(str(i))
placements[str(i)] = []
for i in range(1000):
node = rendezvous.get_node(str(i))
placements[node].append(i)
new_placements = {}
for i in range(20):
rendezvous.add_node(str(i))
new_placements[str(i)] = []
for i in range(1000):
node = rendezvous.get_node(str(i))
new_placements[node].append(i)
keys = [k for sublist in placements.values() for k in sublist]
new_keys = [k for sublist in new_placements.values() for k in sublist]
assert sorted(keys) == sorted(new_keys)
added = 0
removed = 0
for node, assignments in new_placements.items():
after = set(assignments)
before = set(placements.get(node, []))
removed += len(before.difference(after))
added += len(after.difference(before))
assert added == removed
assert 1062 == (added + removed)
@pytest.mark.unit()
def test_shrink():
rendezvous = RendezvousHash()
placements = {}
for i in range(10):
rendezvous.add_node(str(i))
placements[str(i)] = []
for i in range(1000):
node = rendezvous.get_node(str(i))
placements[node].append(i)
rendezvous.remove_node("9")
new_placements = {}
for i in range(9):
new_placements[str(i)] = []
for i in range(1000):
node = rendezvous.get_node(str(i))
new_placements[node].append(i)
keys = [k for sublist in placements.values() for k in sublist]
new_keys = [k for sublist in new_placements.values() for k in sublist]
assert sorted(keys) == sorted(new_keys)
added = 0
removed = 0
for node, assignments in placements.items():
after = set(assignments)
before = set(new_placements.get(node, []))
removed += len(before.difference(after))
added += len(after.difference(before))
assert added == removed
assert 202 == (added + removed)
def collide(key, seed):
return 1337
@pytest.mark.unit()
def test_rendezvous_collision():
nodes = ["c", "b", "a"]
rendezvous = RendezvousHash(nodes, hash_function=collide)
for i in range(1000):
assert "c" == rendezvous.get_node(i)
@pytest.mark.unit()
def test_rendezvous_names():
nodes = [1, 2, 3, "a", "b", "lol.wat.com"]
rendezvous = RendezvousHash(nodes, hash_function=collide)
for i in range(10):
assert "lol.wat.com" == rendezvous.get_node(i)
nodes = [1, "a", "0"]
rendezvous = RendezvousHash(nodes, hash_function=collide)
for i in range(10):
assert "a" == rendezvous.get_node(i)
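The grow/shrink tests above rely on the defining property of rendezvous (highest-random-weight) hashing: each key is owned by the node with the top score, so removing any *other* node never moves that key. A minimal stdlib sketch of the idea — the `weight` function is an arbitrary illustrative choice, not the hash `RendezvousHash` actually uses:

```python
import hashlib

def weight(node: str, key: str) -> int:
    # Score each (node, key) pair; any well-mixed hash works here.
    digest = hashlib.md5(f"{node}-{key}".encode()).hexdigest()
    return int(digest, 16)

def get_node(nodes, key):
    # Highest-random-weight: every caller independently picks the top scorer.
    return max(nodes, key=lambda node: weight(node, key))

nodes = ["a", "b", "c"]
owner = get_node(nodes, "mykey")
# Removing any non-owner node leaves the key's placement unchanged.
assert all(
    get_node([n for n in nodes if n != other], "mykey") == owner
    for other in nodes
    if other != owner
)
```

This is why only keys owned by a removed node are reassigned, which the `test_shrink` assertions count directly.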


@@ -0,0 +1,147 @@
from unittest import TestCase
from pymemcache.serde import (
CompressedSerde,
pickle_serde,
PickleSerde,
FLAG_BYTES,
FLAG_COMPRESSED,
FLAG_PICKLE,
FLAG_INTEGER,
FLAG_TEXT,
)
import pytest
import pickle
import sys
import zlib
class CustomInt(int):
"""
Custom integer type for testing.
    Entirely useless, but used to show that built-in types get serialized and
deserialized back as the same type of object.
"""
pass
def check(serde, value, expected_flags):
serialized, flags = serde.serialize(b"key", value)
assert flags == expected_flags
    # pymemcache stores values as byte strings, so we immediately convert the
    # value if needed so deserialization works as it would with a real server
if not isinstance(serialized, bytes):
serialized = str(serialized).encode("ascii")
deserialized = serde.deserialize(b"key", serialized, flags)
assert deserialized == value
@pytest.mark.unit()
class TestSerde:
serde = pickle_serde
def test_bytes(self):
check(self.serde, b"value", FLAG_BYTES)
check(self.serde, b"\xc2\xa3 $ \xe2\x82\xac", FLAG_BYTES) # £ $ €
def test_unicode(self):
check(self.serde, "value", FLAG_TEXT)
check(self.serde, "£ $ €", FLAG_TEXT)
def test_int(self):
check(self.serde, 1, FLAG_INTEGER)
def test_pickleable(self):
check(self.serde, {"a": "dict"}, FLAG_PICKLE)
def test_subtype(self):
# Subclass of a native type will be restored as the same type
check(self.serde, CustomInt(123123), FLAG_PICKLE)
@pytest.mark.unit()
class TestSerdePickleVersion0(TestCase):
serde = PickleSerde(pickle_version=0)
@pytest.mark.unit()
class TestSerdePickleVersion1(TestCase):
serde = PickleSerde(pickle_version=1)
@pytest.mark.unit()
class TestSerdePickleVersion2(TestCase):
serde = PickleSerde(pickle_version=2)
@pytest.mark.unit()
class TestSerdePickleVersionHighest(TestCase):
serde = PickleSerde(pickle_version=pickle.HIGHEST_PROTOCOL)
@pytest.mark.parametrize("serde", [pickle_serde, CompressedSerde()])
@pytest.mark.unit()
def test_compressed_simple(serde):
# test_bytes
check(serde, b"value", FLAG_BYTES)
check(serde, b"\xc2\xa3 $ \xe2\x82\xac", FLAG_BYTES) # £ $ €
# test_unicode
check(serde, "value", FLAG_TEXT)
check(serde, "£ $ €", FLAG_TEXT)
# test_int
check(serde, 1, FLAG_INTEGER)
# test_pickleable
check(serde, {"a": "dict"}, FLAG_PICKLE)
# test_subtype
# Subclass of a native type will be restored as the same type
check(serde, CustomInt(12312), FLAG_PICKLE)
@pytest.mark.parametrize(
"serde",
[
CompressedSerde(min_compress_len=49),
# Custom compression. This could be something like lz4
CompressedSerde(
compress=lambda value: zlib.compress(value, 9),
decompress=lambda value: zlib.decompress(value),
min_compress_len=49,
),
],
)
@pytest.mark.unit()
def test_compressed_complex(serde):
# test_bytes
check(serde, b"value" * 10, FLAG_BYTES | FLAG_COMPRESSED)
check(serde, b"\xc2\xa3 $ \xe2\x82\xac" * 10, FLAG_BYTES | FLAG_COMPRESSED) # £ $ €
# test_unicode
check(serde, "value" * 10, FLAG_TEXT | FLAG_COMPRESSED)
check(serde, "£ $ €" * 10, FLAG_TEXT | FLAG_COMPRESSED)
# test_int, doesn't make sense to compress
check(serde, sys.maxsize, FLAG_INTEGER)
# test_pickleable
check(
serde,
{
"foo": "bar",
"baz": "qux",
"uno": "dos",
"tres": "tres",
},
FLAG_PICKLE | FLAG_COMPRESSED,
)
# test_subtype
# Subclass of a native type will be restored as the same type
check(serde, CustomInt(sys.maxsize), FLAG_PICKLE | FLAG_COMPRESSED)
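The flag arithmetic these tests assert (`FLAG_TEXT | FLAG_COMPRESSED` and so on) comes from a simple wrapping scheme: compress only payloads at or above a threshold, and record that decision in a flag bit so deserialization knows whether to decompress. A stdlib-only sketch of that scheme, with the flag value and threshold chosen here for illustration:

```python
import zlib

FLAG_COMPRESSED = 8  # illustrative flag bit, mirroring the tests above

def serialize(value: bytes, min_compress_len: int = 49):
    flags = 0
    if len(value) >= min_compress_len:
        value = zlib.compress(value)
        flags |= FLAG_COMPRESSED       # remember that we compressed
    return value, flags

def deserialize(value: bytes, flags: int) -> bytes:
    if flags & FLAG_COMPRESSED:
        value = zlib.decompress(value)
    return value

data = b"value" * 10                   # 50 bytes: over the threshold
blob, flags = serialize(data)
assert flags & FLAG_COMPRESSED
assert deserialize(blob, flags) == data

short = b"hi"                          # under the threshold: stored as-is
blob2, flags2 = serialize(short)
assert flags2 == 0 and blob2 == short
```

Because the compression bit is OR-ed on top of the type flags, an inner serde's flags survive the wrapping unchanged, which is exactly what `FLAG_PICKLE | FLAG_COMPRESSED` checks.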


@@ -0,0 +1,113 @@
import pytest
from pymemcache.test.utils import MockMemcacheClient
@pytest.mark.unit()
def test_get_set():
client = MockMemcacheClient()
assert client.get(b"hello") is None
client.set(b"hello", 12)
assert client.get(b"hello") == 12
@pytest.mark.unit()
def test_get_set_unicode_key():
client = MockMemcacheClient()
assert client.get("hello") is None
client.set(b"hello", 12)
assert client.get("hello") == 12
@pytest.mark.unit()
def test_get_set_non_ascii_value():
client = MockMemcacheClient()
assert client.get(b"hello") is None
# This is the value of msgpack.packb('non_ascii')
non_ascii_str = b"\xa9non_ascii"
client.set(b"hello", non_ascii_str)
assert client.get(b"hello") == non_ascii_str
@pytest.mark.unit()
def test_get_many_set_many():
client = MockMemcacheClient()
client.set(b"h", 1)
result = client.get_many([b"h", b"e", b"l", b"o"])
assert result == {b"h": 1}
# Convert keys into bytes
d = {k.encode("ascii"): v for k, v in dict(h=1, e=2, z=3).items()}
client.set_many(d)
assert client.get_many([b"h", b"e", b"z", b"o"]) == d
@pytest.mark.unit()
def test_get_many_set_many_non_ascii_values():
client = MockMemcacheClient()
# These are the values of calling msgpack.packb() on '1', '2', and '3'
non_ascii_1 = b"\xa11"
non_ascii_2 = b"\xa12"
non_ascii_3 = b"\xa13"
client.set(b"h", non_ascii_1)
result = client.get_many([b"h", b"e", b"l", b"o"])
assert result == {b"h": non_ascii_1}
# Convert keys into bytes
d = {
k.encode("ascii"): v
for k, v in dict(h=non_ascii_1, e=non_ascii_2, z=non_ascii_3).items()
}
client.set_many(d)
assert client.get_many([b"h", b"e", b"z", b"o"]) == d
@pytest.mark.unit()
def test_add():
client = MockMemcacheClient()
client.add(b"k", 2)
assert client.get(b"k") == 2
client.add(b"k", 25)
assert client.get(b"k") == 2
@pytest.mark.unit()
def test_delete():
client = MockMemcacheClient()
client.add(b"k", 2)
assert client.get(b"k") == 2
client.delete(b"k")
assert client.get(b"k") is None
@pytest.mark.unit()
def test_incr_decr():
client = MockMemcacheClient()
client.add(b"k", 2)
client.incr(b"k", 4)
assert client.get(b"k") == 6
client.decr(b"k", 2)
assert client.get(b"k") == 4
@pytest.mark.unit()
def test_prepend_append():
client = MockMemcacheClient()
client.set(b"k", "1")
client.append(b"k", "a")
client.prepend(b"k", "p")
assert client.get(b"k") == b"p1a"
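The point of a mock client like the one tested above is that cache-dependent code can be unit-tested without a running memcached. A minimal illustration of that pattern — `DictClient` and `get_or_compute` are invented for this sketch, not pymemcache API:

```python
class DictClient:
    """Tiny stand-in exposing only the get/set subset used below."""

    def __init__(self):
        self._d = {}

    def get(self, key):
        return self._d.get(key)

    def set(self, key, value):
        self._d[key] = value

def get_or_compute(client, key, compute):
    # Cache-aside: return the cached value, or compute and store it once.
    value = client.get(key)
    if value is None:
        value = compute()
        client.set(key, value)
    return value

client = DictClient()
calls = []
result = get_or_compute(client, b"k", lambda: calls.append(1) or 42)
assert result == 42 and client.get(b"k") == 42
# The second lookup is served from the cache; compute() never runs again.
assert get_or_compute(client, b"k", lambda: calls.append(1) or 99) == 42
assert len(calls) == 1
```

Swapping `DictClient` for `MockMemcacheClient` gives the same test shape plus real serde and expiry behavior.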


@@ -0,0 +1,223 @@
"""
Useful testing utilities.
This module is considered public API.
"""
import time
import socket
from pymemcache.exceptions import MemcacheClientError, MemcacheIllegalInputError
from pymemcache.serde import LegacyWrappingSerde
from pymemcache.client.base import check_key_helper
class MockMemcacheClient:
"""
A (partial) in-memory mock for Clients.
"""
def __init__(
self,
server=None,
serde=None,
serializer=None,
deserializer=None,
connect_timeout=None,
timeout=None,
no_delay=False,
ignore_exc=False,
socket_module=None,
default_noreply=True,
allow_unicode_keys=False,
encoding="ascii",
tls_context=None,
):
self._contents = {}
self.serde = serde or LegacyWrappingSerde(serializer, deserializer)
self.allow_unicode_keys = allow_unicode_keys
# Unused, but present for interface compatibility
self.server = server
self.connect_timeout = connect_timeout
self.timeout = timeout
self.no_delay = no_delay
self.ignore_exc = ignore_exc
self.socket_module = socket
self.sock = None
self.encoding = encoding
self.tls_context = tls_context
def check_key(self, key):
        """Checks that the key is valid."""
return check_key_helper(key, allow_unicode_keys=self.allow_unicode_keys)
def clear(self):
"""Method used to clear/reset mock cache"""
self._contents.clear()
def get(self, key, default=None):
key = self.check_key(key)
if key not in self._contents:
return default
expire, value, flags = self._contents[key]
if expire and expire < time.time():
del self._contents[key]
return default
return self.serde.deserialize(key, value, flags)
def get_many(self, keys):
out = {}
for key in keys:
value = self.get(key)
if value is not None:
out[key] = value
return out
get_multi = get_many
def set(self, key, value, expire=0, noreply=True, flags=None):
key = self.check_key(key)
if isinstance(value, str) and not isinstance(value, bytes):
try:
value = value.encode(self.encoding)
except (UnicodeEncodeError, UnicodeDecodeError):
raise MemcacheIllegalInputError
value, flags = self.serde.serialize(key, value)
if expire:
expire += time.time()
self._contents[key] = expire, value, flags
return True
def set_many(self, values, expire=0, noreply=True, flags=None):
result = []
for key, value in values.items():
ret = self.set(key, value, expire, noreply, flags=flags)
if not ret:
result.append(key)
return [] if noreply else result
set_multi = set_many
def incr(self, key, value, noreply=False):
current = self.get(key)
present = current is not None
if present:
self.set(key, current + value, noreply=noreply)
return None if noreply or not present else current + value
def decr(self, key, value, noreply=False):
current = self.get(key)
present = current is not None
if present:
self.set(key, current - value, noreply=noreply)
return None if noreply or not present else current - value
def add(self, key, value, expire=0, noreply=True, flags=None):
current = self.get(key)
present = current is not None
if not present:
self.set(key, value, expire, noreply, flags=flags)
return noreply or not present
def delete(self, key, noreply=True):
key = self.check_key(key)
current = self._contents.pop(key, None)
present = current is not None
return noreply or present
def delete_many(self, keys, noreply=True):
for key in keys:
self.delete(key, noreply)
return True
def prepend(self, key, value, expire=0, noreply=True, flags=None):
current = self.get(key)
if current is not None:
if isinstance(value, str) and not isinstance(value, bytes):
try:
value = value.encode(self.encoding)
except (UnicodeEncodeError, UnicodeDecodeError):
raise MemcacheIllegalInputError
self.set(key, value + current, expire, noreply, flags=flags)
return True
def append(self, key, value, expire=0, noreply=True, flags=None):
current = self.get(key)
if current is not None:
if isinstance(value, str) and not isinstance(value, bytes):
try:
value = value.encode(self.encoding)
except (UnicodeEncodeError, UnicodeDecodeError):
raise MemcacheIllegalInputError
self.set(key, current + value, expire, noreply, flags=flags)
return True
delete_multi = delete_many
def stats(self, *_args):
# I make no claim that these values make any sense, but the format
# of the output is the same as for pymemcache.client.Client.stats()
return {
"version": "MockMemcacheClient",
"rusage_user": 1.0,
"rusage_system": 1.0,
"hash_is_expanding": False,
"slab_reassign_running": False,
"inter": "in-memory",
"evictions": False,
"growth_factor": 1.0,
"stat_key_prefix": "",
"umask": 0o644,
"detail_enabled": False,
"cas_enabled": False,
"auth_enabled_sasl": False,
"maxconns_fast": False,
"slab_reassign": False,
"slab_automove": False,
}
def replace(self, key, value, expire=0, noreply=True, flags=None):
current = self.get(key)
present = current is not None
if present:
self.set(key, value, expire, noreply, flags=flags)
return noreply or present
def cas(self, key, value, cas, expire=0, noreply=False, flags=None):
raise MemcacheClientError("CAS is not enabled for this instance")
def touch(self, key, expire=0, noreply=True):
current = self.get(key)
present = current is not None
if present:
self.set(key, current, expire, noreply=noreply)
return True if noreply or present else False
def cache_memlimit(self, memlimit):
return True
def version(self):
return "MockMemcacheClient"
def flush_all(self, delay=0, noreply=True):
self.clear()
return noreply or self._contents == {}
def quit(self):
pass
def close(self):
pass


@@ -15,6 +15,8 @@ debtcollector==2.5.0
decorator==5.1.1
Django==4.2.1
django-debug-toolbar==4.1.0
django-ranged-response==0.2.0
django-simple-captcha==0.5.20
dnspython==2.3.0
dogpile.cache==1.2.1
eventlet==0.33.3


@@ -16,6 +16,8 @@ Our site contains links to external third-party websites, over whose content we
<br><b>Other Disclaimers</b><br>
We point out that data transmission over the Internet (e.g. when communicating by e-mail) can be subject to security vulnerabilities. Complete protection of data against access by third parties is not possible. <br>
The use by third parties of contact data published to satisfy the legal imprint obligation for sending advertising and informational material that was not expressly requested is hereby expressly objected to. The operators of this site expressly reserve the right to take legal action in the event of unsolicited advertising being sent, for instance via spam e-mails.<br>
<br><b>Source Code</b><br>
The source code of this page can be found at <a href="https://git.denkena-consulting.com/f-denkena/impuls">https://git.denkena-consulting.com/f-denkena/impuls</a>. Pull requests and other contributions are always welcome. Your bug reports will also be handled as quickly as possible.<br>
</div>
</section>
{% endblock content %}

Binary file not shown.