PostgreSQL - SQLAlchemy 1.3 Documentation
Source: https://tutos-gameserver.fr/2019/10/21/postgresql-documentation-sqlalchemy-1-3-bien-choisir-son-serveur-d-impression/ (Tutos GameServer, Titanfall, 2019-10-21)

Support for the PostgreSQL database.

DBAPI Support
The following dialect/DBAPI options are available. Please refer to individual DBAPI sections for connect information.

Sequences / SERIAL / IDENTITY
PostgreSQL supports sequences, and SQLAlchemy uses these as the default means of creating new primary key values for integer-based primary key columns.
When creating tables, SQLAlchemy will issue the SERIAL datatype for integer-based primary key columns, which generates a sequence and server side default corresponding to the column.

To specify a specific named sequence to be used for primary key generation, use the Sequence() construct:

Table('sometable', metadata,
        Column('id', Integer, Sequence('some_id_seq'), primary_key=True)
    )

When SQLAlchemy issues a single INSERT statement, to fulfill the contract of having the "last insert identifier" available, a RETURNING clause is added to the INSERT statement which specifies the primary key columns should be returned after the statement completes. The RETURNING functionality only takes place if PostgreSQL 8.2 or later is in use. As a fallback approach, the sequence, whether specified explicitly or implicitly via SERIAL, is executed independently beforehand, the returned value to be used in the subsequent insert.
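As a sketch of the single-row INSERT behavior described above, the RETURNING clause can be observed by compiling a statement against the PostgreSQL dialect without connecting to a database. The table and column names below are illustrative, not taken from this document:

```python
from sqlalchemy import Table, Column, Integer, String, MetaData
from sqlalchemy.dialects import postgresql

metadata = MetaData()
# Hypothetical table following the integer primary key pattern above
sometable = Table(
    "sometable", metadata,
    Column("id", Integer, primary_key=True),
    Column("data", String),
)

# An explicit RETURNING clause; at execution time the dialect adds one
# implicitly for single-row INSERTs when implicit_returning is enabled.
stmt = sometable.insert().values(data="x").returning(sometable.c.id)
sql = str(stmt.compile(dialect=postgresql.dialect()))
print(sql)
```

Compiling with the dialect object alone renders the statement string only; no connection or sequence execution takes place.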
Note that when an insert() construct is executed using "executemany" semantics, the "last inserted identifier" functionality does not apply; no RETURNING clause is emitted nor is the sequence pre-executed in this case.

To force the usage of RETURNING by default off, specify the flag implicit_returning=False to create_engine().

PostgreSQL 10 IDENTITY columns
PostgreSQL 10 has a new IDENTITY feature that supersedes the use of SERIAL. Built-in support for rendering of IDENTITY is not available yet, however the following compilation hook may be used to replace occurrences of SERIAL with IDENTITY:

from sqlalchemy.schema import CreateColumn
from sqlalchemy.ext.compiler import compiles


@compiles(CreateColumn, 'postgresql')
def use_identity(element, compiler, **kw):
    text = compiler.visit_create_column(element, **kw)
    text = text.replace("SERIAL", "INT GENERATED BY DEFAULT AS IDENTITY")
    return text

Using the above, a table such as:

t = Table(
    't', m,
    Column('id', Integer, primary_key=True),
    Column('data', String)
)

Will generate on the backing database as:

CREATE TABLE t (
    id INT GENERATED BY DEFAULT AS IDENTITY NOT NULL,
    data VARCHAR,
    PRIMARY KEY (id)
)

Transaction Isolation Level
All PostgreSQL dialects support setting of transaction isolation level both via a dialect-specific parameter create_engine.isolation_level accepted by create_engine(), as well as the Connection.execution_options.isolation_level argument as passed to Connection.execution_options(). When using a non-psycopg2 dialect, this feature works by issuing the command SET
SESSION CHARACTERISTICS AS TRANSACTION ISOLATION LEVEL for each new connection. For the special AUTOCOMMIT isolation level, DBAPI-specific techniques are used.

To set isolation level using create_engine():

engine = create_engine(
    "postgresql+pg8000://scott:tiger@localhost/test",
    isolation_level="READ UNCOMMITTED"
)

To set using per-connection execution options:

connection = engine.connect()
connection = connection.execution_options(
    isolation_level="READ COMMITTED"
)

Valid values for isolation_level include:

Remote-Schema Table Introspection and PostgreSQL search_path
TL;DR;: keep the search_path variable set to its default of public, name schemas other than public explicitly within Table definitions.

The PostgreSQL dialect can reflect tables from any schema. The Table.schema argument, or alternatively the MetaData.reflect.schema argument determines which schema will be searched for the table or tables. The reflected Table objects will in all cases retain this .schema attribute as was specified. However, with regards to tables which these Table objects refer to via foreign key constraint, a decision must be made as to how the .schema is represented in those remote tables, in the case where that remote schema name is also a member of the current PostgreSQL search path.

By default, the PostgreSQL dialect mimics the behavior encouraged by PostgreSQL's own pg_get_constraintdef() builtin procedure. This function returns a sample definition for a particular foreign key constraint, omitting the referenced schema name from that definition when the name is also in the PostgreSQL schema search path.
The interaction below illustrates this behavior:

test=> CREATE TABLE test_schema.referred(id INTEGER PRIMARY KEY);
CREATE TABLE
test=> CREATE TABLE referring(
test(>         id INTEGER PRIMARY KEY,
test(>         referred_id INTEGER REFERENCES test_schema.referred(id));
CREATE TABLE
test=> SET search_path TO public, test_schema;
test=> SELECT pg_catalog.pg_get_constraintdef(r.oid, true) FROM
test-> pg_catalog.pg_class c JOIN pg_catalog.pg_namespace n
test-> ON n.oid = c.relnamespace
test-> JOIN pg_catalog.pg_constraint r  ON c.oid = r.conrelid
test-> WHERE c.relname='referring' AND r.contype = 'f'
test-> ;
               pg_get_constraintdef
---------------------------------------------------
 FOREIGN KEY (referred_id) REFERENCES referred(id)
(1 row)

Above, we created a table referred as a member of the remote schema test_schema, however when we added test_schema to the PG search_path and then asked pg_get_constraintdef() for the FOREIGN KEY syntax, test_schema was not included in the output of the function.

On the other hand, if we set the search path back to the typical default of public:

test=> SET search_path TO public;
SET

The same query against pg_get_constraintdef() now returns the fully schema-qualified name for us:

test=> SELECT pg_catalog.pg_get_constraintdef(r.oid, true) FROM
test-> pg_catalog.pg_class c JOIN pg_catalog.pg_namespace n
test-> ON n.oid = c.relnamespace
test-> JOIN pg_catalog.pg_constraint r  ON c.oid = r.conrelid
test-> WHERE c.relname='referring' AND r.contype = 'f';
                     pg_get_constraintdef
---------------------------------------------------------------
 FOREIGN KEY (referred_id) REFERENCES test_schema.referred(id)
(1 row)

SQLAlchemy will by default use the return value of pg_get_constraintdef() in order to determine the remote schema name. That is, if our search_path were set to include test_schema, and we invoked a table reflection process as follows:

>>> from sqlalchemy import Table, MetaData, create_engine
>>> engine = create_engine("postgresql://scott:tiger@localhost/test")
>>> with engine.connect() as conn:
...     conn.execute("SET search_path TO test_schema, public")
...     meta = MetaData()
...     referring = Table('referring', meta,
...                       autoload=True, autoload_with=conn)
...

The above process would deliver to the MetaData.tables collection the referred table named without the schema:

>>> meta.tables['referred'].schema is None
True

To alter the behavior of reflection such that the referred schema is maintained regardless of the search_path setting, use the postgresql_ignore_search_path option, which can be specified as a dialect-specific argument to both Table as well as MetaData.reflect():

>>> with engine.connect() as conn:
...     conn.execute("SET search_path TO test_schema, public")
...     meta = MetaData()
...     referring = Table('referring', meta, autoload=True,
...                       autoload_with=conn,
...                       postgresql_ignore_search_path=True)
...

We will now have test_schema.referred stored as schema-qualified:

>>> meta.tables['test_schema.referred'].schema
'test_schema'

Note that in all cases, the "default" schema is always reflected as None. The "default" schema on PostgreSQL is that which is returned by the PostgreSQL current_schema() function. On a typical PostgreSQL installation, this is the name public. So a table that refers to another which is in the public (i.e. default) schema will always have the .schema attribute set to None.

New in version 0.9.2: Added the postgresql_ignore_search_path dialect-level option accepted by Table and MetaData.reflect().

INSERT/UPDATE...RETURNING
The dialect supports PG 8.2's INSERT..RETURNING, UPDATE..RETURNING and DELETE..RETURNING syntaxes. INSERT..RETURNING is used by default for single-row INSERT statements in order to fetch newly generated primary key identifiers. To specify an explicit RETURNING clause, use the _UpdateBase.returning() method on a per-statement basis:

# INSERT..RETURNING
result = table.insert().returning(table.c.col1, table.c.col2).\
    values(name='foo')
print(result.fetchall())

# UPDATE..RETURNING
result = table.update().returning(table.c.col1, table.c.col2).\
    where(table.c.name=='foo').values(name='bar')
print(result.fetchall())

# DELETE..RETURNING
result = table.delete().returning(table.c.col1, table.c.col2).\
    where(table.c.name=='foo')
print(result.fetchall())

INSERT...ON CONFLICT (Upsert)
Starting with version 9.5, PostgreSQL allows "upserts" (update or insert) of rows into a table via the ON CONFLICT clause of the INSERT statement.
A candidate row will only be inserted if that row does not violate any unique constraints. In the case of a unique constraint violation, a secondary action can occur which can be either "DO UPDATE", indicating that the data in the target row should be updated, or "DO NOTHING", which indicates to silently skip this row.

Conflicts are determined using existing unique constraints and indexes. These constraints may be identified either using their name as stated in DDL, or they may be inferred by stating the columns and conditions that comprise the indexes.

SQLAlchemy provides ON CONFLICT support via the PostgreSQL-specific postgresql.dml.insert() function, which provides the generative methods on_conflict_do_update() and on_conflict_do_nothing():

from sqlalchemy.dialects.postgresql import insert

insert_stmt = insert(my_table).values(
    id='some_existing_id',
    data='inserted value')

do_nothing_stmt = insert_stmt.on_conflict_do_nothing(
    index_elements=['id']
)

conn.execute(do_nothing_stmt)

do_update_stmt = insert_stmt.on_conflict_do_update(
    constraint='pk_my_table',
    set_=dict(data='updated value')
)

conn.execute(do_update_stmt)

Both methods supply the "target" of the conflict using either the named constraint or by column inference:

The Insert.on_conflict_do_update.index_elements argument specifies a sequence containing string column names, Column objects, and/or SQL expression elements, which would identify a unique index:

do_update_stmt = insert_stmt.on_conflict_do_update(
    index_elements=['id'],
    set_=dict(data='updated value')
)

do_update_stmt = insert_stmt.on_conflict_do_update(
    index_elements=[my_table.c.id],
    set_=dict(data='updated value')
)

When using Insert.on_conflict_do_update.index_elements to infer an index, a partial index can be inferred by also specifying the Insert.on_conflict_do_update.index_where parameter:

from sqlalchemy.dialects.postgresql import insert

stmt = insert(my_table).values(user_email='a@b.com', data='inserted data')
stmt = stmt.on_conflict_do_update(
    index_elements=[my_table.c.user_email],
    index_where=my_table.c.user_email.like('%@gmail.com'),
    set_=dict(data=stmt.excluded.data)
    )
conn.execute(stmt)

The Insert.on_conflict_do_update.constraint argument is used to specify an index directly rather than inferring it. This can be the name of a UNIQUE constraint, a PRIMARY KEY constraint, or an INDEX:

do_update_stmt = insert_stmt.on_conflict_do_update(
    constraint='my_table_idx_1',
    set_=dict(data='updated value')
)

do_update_stmt = insert_stmt.on_conflict_do_update(
    constraint='my_table_pk',
    set_=dict(data='updated value')
)

The Insert.on_conflict_do_update.constraint argument may also refer to a SQLAlchemy construct representing a constraint, e.g. UniqueConstraint, PrimaryKeyConstraint, Index, or ExcludeConstraint. In this use, if the constraint has a name, it is used directly. Otherwise, if the constraint is unnamed, then inference will be used, where the expressions and optional WHERE clause of the constraint will be spelled out in the construct.
This use is especially convenient for referring to the named or unnamed primary key of a Table using the Table.primary_key attribute:

do_update_stmt = insert_stmt.on_conflict_do_update(
    constraint=my_table.primary_key,
    set_=dict(data='updated value')
)

ON CONFLICT...DO UPDATE is used to perform an update of the already existing row, using any combination of new values as well as values from the proposed insertion. These values are specified using the Insert.on_conflict_do_update.set_ parameter. This parameter accepts a dictionary which consists of direct values for UPDATE:

from sqlalchemy.dialects.postgresql import insert

stmt = insert(my_table).values(id='some_id', data='inserted value')
do_update_stmt = stmt.on_conflict_do_update(
    index_elements=['id'],
    set_=dict(data='updated value')
    )
conn.execute(do_update_stmt)

In order to refer to the proposed insertion row, the special alias excluded is available as an attribute on the postgresql.dml.Insert object; this object is a ColumnCollection which alias contains all columns of the target table:

from sqlalchemy.dialects.postgresql import insert

stmt = insert(my_table).values(
    id='some_id',
    data='inserted value',
    author='jlh')
do_update_stmt = stmt.on_conflict_do_update(
    index_elements=['id'],
    set_=dict(data='updated value', author=stmt.excluded.author)
    )
conn.execute(do_update_stmt)

The Insert.on_conflict_do_update() method also accepts a WHERE clause using the Insert.on_conflict_do_update.where parameter, which will limit those rows which receive an UPDATE:

from
sqlalchemy.dialects.postgresql import insert

stmt = insert(my_table).values(
    id='some_id',
    data='inserted value',
    author='jlh')
on_update_stmt = stmt.on_conflict_do_update(
    index_elements=['id'],
    set_=dict(data='updated value', author=stmt.excluded.author),
    where=(my_table.c.status == 2)
    )
conn.execute(on_update_stmt)

ON CONFLICT may also be used to skip inserting a row entirely if any conflict with a unique or exclusion constraint occurs; below this is illustrated using the on_conflict_do_nothing() method:

from sqlalchemy.dialects.postgresql import insert

stmt = insert(my_table).values(id='some_id', data='inserted value')
stmt = stmt.on_conflict_do_nothing(index_elements=['id'])
conn.execute(stmt)

If DO NOTHING is used without specifying any columns or constraint, it has the effect of skipping the INSERT for any unique or exclusion constraint violation which occurs:

from sqlalchemy.dialects.postgresql import insert

stmt = insert(my_table).values(id='some_id', data='inserted value')
stmt = stmt.on_conflict_do_nothing()
conn.execute(stmt)

New in version 1.1: Added support for PostgreSQL ON CONFLICT clauses

Full Text Search
SQLAlchemy makes available the PostgreSQL @@ operator via the ColumnElement.match() method on any textual column expression. On a PostgreSQL dialect, an expression like the following:

select([sometable.c.text.match("search string")])

will emit to the database:

SELECT text @@ to_tsquery('search string') FROM table

The PostgreSQL text search functions such
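As an illustrative sketch (the table definition below is hypothetical, standing in for my_table from the examples above), the rendered upsert SQL can be inspected by compiling the construct against the PostgreSQL dialect, with no live connection required:

```python
from sqlalchemy import Table, Column, Integer, String, MetaData
from sqlalchemy.dialects import postgresql
from sqlalchemy.dialects.postgresql import insert

metadata = MetaData()
# Hypothetical table standing in for my_table in the examples above
my_table = Table(
    "my_table", metadata,
    Column("id", Integer, primary_key=True),
    Column("data", String),
)

stmt = insert(my_table).values(id=1, data="inserted value")
# Use the proposed row's value via the special "excluded" alias
stmt = stmt.on_conflict_do_update(
    index_elements=["id"],
    set_=dict(data=stmt.excluded.data),
)
sql = str(stmt.compile(dialect=postgresql.dialect()))
print(sql)
```

The compiled string should contain the ON CONFLICT target and the excluded-alias assignment, mirroring the DDL-level behavior described above.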
as to_tsquery() and to_tsvector() are available explicitly using the standard func construct. For example:

select([
    func.to_tsvector('fat cats ate rats').match('cat & rat')
])

Emits the equivalent of:

SELECT to_tsvector('fat cats ate rats') @@ to_tsquery('cat & rat')

The postgresql.TSVECTOR type can provide for explicit CASTs:

from sqlalchemy.dialects.postgresql import TSVECTOR
from sqlalchemy import select, cast
select([cast("some text", TSVECTOR)])

produces a statement equivalent to:

SELECT CAST('some text' AS TSVECTOR) AS anon_1

Full Text Searches in PostgreSQL are influenced by a combination of: the PostgreSQL setting of default_text_search_config, the regconfig used to build the GIN/GiST indexes, and the regconfig optionally passed in during a query.

When performing a Full Text Search against a column that has a GIN or GiST index that is already pre-computed (which is common on full text searches) one may need to explicitly pass in a particular PostgreSQL regconfig value to ensure the query planner utilizes the index and does not re-compute the column on demand.

In order to provide for this explicit query planning, or to use different search strategies, the match method accepts a postgresql_regconfig keyword argument:

select([mytable.c.id]).where(
    mytable.c.title.match('somestring', postgresql_regconfig='english')
)

Emits the equivalent of:

SELECT mytable.id FROM mytable
WHERE mytable.title @@ to_tsquery('english', 'somestring')

One can also specifically pass in a 'regconfig' value to
the to_tsvector() command as the initial argument:

select([mytable.c.id]).where(
        func.to_tsvector('english', mytable.c.title)
        .match('somestring', postgresql_regconfig='english')
    )

produces a statement equivalent to:

SELECT mytable.id FROM mytable
WHERE to_tsvector('english', mytable.title) @@
    to_tsquery('english', 'somestring')

It is recommended that you use the EXPLAIN ANALYZE... tool from PostgreSQL to ensure that you are generating queries with SQLAlchemy that take full advantage of any indexes you may have created for full text search.

FROM ONLY ...
The dialect supports PostgreSQL's ONLY keyword for targeting only a particular table in an inheritance hierarchy. This can be used to produce the SELECT ... FROM ONLY, UPDATE ONLY ..., and DELETE FROM ONLY ... syntaxes. It uses SQLAlchemy's hints mechanism:

# SELECT ... FROM ONLY ...
result = table.select().with_hint(table, 'ONLY', 'postgresql')
print(result.fetchall())

# UPDATE ONLY ...
table.update(values=dict(foo='bar')).with_hint('ONLY',
                                               dialect_name='postgresql')

# DELETE FROM ONLY ...
table.delete().with_hint('ONLY', dialect_name='postgresql')

PostgreSQL-Specific Index Options
Several extensions to the Index construct are available, specific to the PostgreSQL dialect.

Partial Indexes
Partial indexes add criterion to the index definition so that the index is applied to a subset of rows.
These can be specified on Index using the postgresql_where keyword argument:

Index('my_index', my_table.c.id, postgresql_where=my_table.c.value > 10)

Operator Classes
PostgreSQL allows the specification of an operator class for each column of an index (see http://www.postgresql.org/docs/8.3/interactive/indexes-opclass.html). The Index construct allows these to be specified via the postgresql_ops keyword argument:

Index(
    'my_index', my_table.c.id, my_table.c.data,
    postgresql_ops={
        'data': 'text_pattern_ops',
        'id': 'int4_ops'
    })

Note that the keys in the postgresql_ops dictionary are the "key" name of the Column, i.e. the name used to access it from the .c collection of Table, which can be configured to be different than the actual name of the column as expressed in the database.

If postgresql_ops is to be used against a complex SQL expression such as a function call, then to apply to the column it must be given a label that is identified in the dictionary by name, e.g.:

Index(
    'my_index', my_table.c.id,
    func.lower(my_table.c.data).label('data_lower'),
    postgresql_ops={
        'data_lower': 'text_pattern_ops',
        'id': 'int4_ops'
    })

Index Types
PostgreSQL provides several index types: B-Tree, Hash, GiST, and GIN, as well as the ability for users to create their own (see http://www.postgresql.org/docs/8.3/static/indexes-types.html).
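The DDL emitted for an operator-class index can be previewed without a database by compiling a CreateIndex construct against the PostgreSQL dialect. The table and index names here are illustrative only:

```python
from sqlalchemy import Table, Column, Integer, String, MetaData, Index
from sqlalchemy.schema import CreateIndex
from sqlalchemy.dialects import postgresql

metadata = MetaData()
# Hypothetical table used only to demonstrate DDL rendering
my_table = Table(
    "my_table", metadata,
    Column("id", Integer),
    Column("data", String),
)

idx = Index(
    "my_index", my_table.c.data,
    postgresql_ops={"data": "text_pattern_ops"},
)
# Render the CREATE INDEX statement the dialect would emit
ddl = str(CreateIndex(idx).compile(dialect=postgresql.dialect()))
print(ddl)
```

The operator class is appended after the column name in the rendered statement, matching how postgresql_ops keys map to column keys as described above.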
These can be specified on Index using the postgresql_using keyword argument:

Index('my_index', my_table.c.data, postgresql_using='gin')

The value passed to the keyword argument will be simply passed through to the underlying CREATE INDEX command, so it must be a valid index type for your version of PostgreSQL.

Index Storage Parameters
PostgreSQL allows storage parameters to be set on indexes. The storage parameters available depend on the index method used by the index. Storage parameters can be specified on Index using the postgresql_with keyword argument:

Index('my_index', my_table.c.data, postgresql_with={"fillfactor": 50})

PostgreSQL allows to define the tablespace in which to create the index. The tablespace can be specified on Index using the postgresql_tablespace keyword argument:

Index('my_index', my_table.c.data, postgresql_tablespace='my_tablespace')

Note that the same option is available on Table as well.

Indexes with CONCURRENTLY
The PostgreSQL index option CONCURRENTLY is supported by passing the flag postgresql_concurrently to the Index construct:

tbl = Table('testtbl', m, Column('data', Integer))

idx1 = Index('test_idx1', tbl.c.data, postgresql_concurrently=True)

The above index construct will render DDL for CREATE INDEX, assuming PostgreSQL 8.2 or higher is detected or for a connection-less dialect, as:

CREATE INDEX CONCURRENTLY test_idx1 ON testtbl (data)

For DROP INDEX, assuming PostgreSQL 9.2 or higher is detected or for a connection-less dialect, it will emit:

DROP INDEX CONCURRENTLY
test_idx1

New in version 1.1: support for CONCURRENTLY on DROP INDEX. The CONCURRENTLY keyword is now only emitted if a high enough version of PostgreSQL is detected on the connection (or for a connection-less dialect).

When using CONCURRENTLY, the PostgreSQL database requires that the statement be invoked outside of a transaction block. The Python DBAPI enforces that even for a single statement, a transaction is present, so to use this construct, the DBAPI's "autocommit" mode must be used:

metadata = MetaData()
table = Table(
    "foo", metadata,
    Column("id", String))
index = Index(
    "foo_idx", table.c.id, postgresql_concurrently=True)

with engine.connect() as conn:
    with conn.execution_options(isolation_level='AUTOCOMMIT'):
        table.create(conn)

PostgreSQL Index Reflection
The PostgreSQL database creates a UNIQUE INDEX implicitly whenever the UNIQUE CONSTRAINT construct is used. When inspecting a table using Inspector, the Inspector.get_indexes() and the Inspector.get_unique_constraints() will report on these two constructs distinctly; in the case of the index, the key duplicates_constraint will be present in the index entry if it is detected as mirroring a constraint.
When reflecting using Table(..., autoload=True), the UNIQUE INDEX is not returned in Table.indexes when it is detected as mirroring a UniqueConstraint in the Table.constraints collection.

Changed in version 1.0.0: - Table reflection now includes UniqueConstraint objects present in the Table.constraints collection; the PostgreSQL backend will no longer include a "mirrored" Index construct in Table.indexes if it is detected as corresponding to a unique constraint.

Special Reflection Options
The Inspector used for the PostgreSQL backend is an instance of PGInspector, which offers additional methods:

from sqlalchemy import create_engine, inspect

engine = create_engine("postgresql+psycopg2://localhost/test")
insp = inspect(engine)  # will be a PGInspector

print(insp.get_enums())

class sqlalchemy.dialects.postgresql.base.PGInspector(conn)

Bases: sqlalchemy.engine.reflection.Inspector

get_enums(schema=None)

Return a list of ENUM objects.
Each member is a dictionary containing these fields:

name - name of the enum

schema - the schema name for the enum.

visible - boolean, whether or not this enum is visible in the default search path.

labels - a list of string labels that apply to the enum.

Parameters

schema - schema name. If None, the default schema (typically 'public') is used.
May also be set to '*' to indicate load enums for all schemas.

get_foreign_table_names(schema=None)

Return a list of FOREIGN TABLE names.
Behavior is similar to that of Inspector.get_table_names(), except that the list is limited to those tables that report a relkind value of f.

get_table_oid(table_name, schema=None)

Return the OID for the given table name.

get_view_names(schema=None, include=('plain', 'materialized'))

Return all view names in schema.

Parameters

schema - Optional, retrieve names from a non-default schema. For special quoting, use quoted_name.

include - specify which types of views to return. Passed as a string value (for a single type) or a tuple (for any number of types). Defaults to ('plain', 'materialized').

PostgreSQL Table Options
Several options for CREATE TABLE are supported directly by the PostgreSQL dialect in conjunction with the Table construct:

ARRAY Types
The PostgreSQL dialect supports arrays, both as multidimensional column types as well as array literals:

JSON Types
The PostgreSQL dialect supports both JSON and JSONB datatypes, including psycopg2's native support and support for all of PostgreSQL's special operators:

HSTORE Type
The PostgreSQL HSTORE type as well as hstore literals are supported:

ENUM Types
PostgreSQL has an independently creatable TYPE structure which is used to implement an enumerated type. This approach introduces significant complexity on the SQLAlchemy side in terms of when this type should be CREATEd and DROPped. The type object is also an independently reflectable entity.
The following sections should be consulted:


Using ENUM with ARRAY

The combination of ENUM and ARRAY is not directly supported by the backend DBAPIs at this time. To send and receive an ARRAY of ENUM, use the following workaround type, which decorates the postgresql.ARRAY datatype.

import re

import sqlalchemy as sa
from sqlalchemy import TypeDecorator
from sqlalchemy.dialects.postgresql import ARRAY

class ArrayOfEnum(TypeDecorator):
    impl = ARRAY

    def bind_expression(self, bindvalue):
        return sa.cast(bindvalue, self)

    def result_processor(self, dialect, coltype):
        super_rp = super(ArrayOfEnum, self).result_processor(
            dialect, coltype)

        def handle_raw_string(value):
            inner = re.match(r"^{(.*)}$", value).group(1)
            return inner.split(",") if inner else []

        def process(value):
            if value is None:
                return None
            return super_rp(handle_raw_string(value))
        return process

E.g.:

Table(
    'mydata', metadata,
    Column('id', Integer, primary_key=True),
    Column('data', ArrayOfEnum(ENUM('a', 'b', 'c', name='myenum')))
)

This type is not included as a builtin type, as it would be incompatible with a DBAPI that suddenly decides to support ARRAY of ENUM directly in a new version.


Using JSON/JSONB with ARRAY

Similar to using ENUM, for an ARRAY of JSON/JSONB we need to render the appropriate CAST; however, current psycopg2 drivers seem to handle the result for ARRAY of JSON automatically, so the type is simpler:

class CastingArray(ARRAY):
    def bind_expression(self, bindvalue):
        return sa.cast(bindvalue, self)

E.g.:

Table(
    'mydata', metadata,
    Column('id', Integer, primary_key=True),
    Column('data', CastingArray(JSONB))
)


PostgreSQL Data Types

As with all SQLAlchemy dialects, all UPPERCASE types that are known to be valid with PostgreSQL are importable from the top level dialect, whether they originate from sqlalchemy.types or from the local dialect:

from sqlalchemy.dialects.postgresql import \
    ARRAY, BIGINT, BIT, BOOLEAN, BYTEA, CHAR, CIDR, DATE, \
    DOUBLE_PRECISION, ENUM, FLOAT, HSTORE, INET, INTEGER, \
    INTERVAL, JSON, JSONB, MACADDR, MONEY, NUMERIC, OID, REAL, SMALLINT, TEXT, \
    TIME, TIMESTAMP, UUID, VARCHAR, INT4RANGE, INT8RANGE, NUMRANGE, \
    DATERANGE, TSRANGE, TSTZRANGE, TSVECTOR

Types which are specific to PostgreSQL, or have PostgreSQL-specific construction arguments, are as follows:


class sqlalchemy.dialects.postgresql.aggregate_order_by(target, *order_by)

Bases: sqlalchemy.sql.expression.ColumnElement

Represent a PostgreSQL aggregate order by expression.

E.g.:

from sqlalchemy.dialects.postgresql import aggregate_order_by
expr = func.array_agg(aggregate_order_by(table.c.a, table.c.b.desc()))
stmt = select([expr])

would represent the expression:

SELECT array_agg(a ORDER BY b DESC) FROM table;

Similarly:

expr = func.string_agg(
    table.c.a,
    aggregate_order_by(literal_column("','"), table.c.a)
)
stmt = select([expr])

would represent:

SELECT string_agg(a, ',' ORDER BY a) FROM table;

Changed in version 1.2.13: the ORDER BY argument may be multiple terms


class sqlalchemy.dialects.postgresql.array(clauses, **kw)

Bases: sqlalchemy.sql.expression.Tuple

A PostgreSQL ARRAY literal. This is used to produce ARRAY literals in SQL expressions, e.g.:

from sqlalchemy.dialects.postgresql import array
from sqlalchemy.dialects import postgresql
from sqlalchemy import select, func

stmt = select([
            array([1, 2]) + array([3, 4, 5])
        ])

print(stmt.compile(dialect=postgresql.dialect()))

Produces the SQL:

SELECT ARRAY[%(param_1)s, %(param_2)s] ||
    ARRAY[%(param_3)s, %(param_4)s, %(param_5)s] AS anon_1

An instance of array will always have the datatype ARRAY. The "inner" type of the array is inferred from the values present, unless the type_ keyword argument is passed:

array(['foo', 'bar'], type_=CHAR)

Multidimensional arrays are produced by nesting array constructs. The dimensionality of the final ARRAY type is calculated by recursively adding the dimensions of the inner ARRAY type:

stmt = select([
    array([
        array([1, 2]), array([3, 4]), array([column('q'), column('x')])
    ])
])
print(stmt.compile(dialect=postgresql.dialect()))

Produces:

SELECT ARRAY[ARRAY[%(param_1)s, %(param_2)s],
ARRAY[%(param_3)s, %(param_4)s], ARRAY[q, x]] AS anon_1

New in version 1.3.6: added support for multidimensional array literals


class sqlalchemy.dialects.postgresql.ARRAY(item_type, as_tuple=False, dimensions=None, zero_indexes=False)

Bases: sqlalchemy.types.ARRAY

PostgreSQL ARRAY type.

The postgresql.ARRAY type is constructed in the same way as the core types.ARRAY type; a member type is required, and a number of dimensions is recommended if the type is to be used for more than one dimension:

from sqlalchemy.dialects import postgresql

mytable = Table("mytable", metadata,
        Column("data", postgresql.ARRAY(Integer, dimensions=2))
    )

The postgresql.ARRAY type provides all operations defined on the core types.ARRAY type, including support for "dimensions", indexed access, and simple matching such as types.ARRAY.Comparator.any() and types.ARRAY.Comparator.all(). The postgresql.ARRAY class also provides PostgreSQL-specific methods for containment operations, including postgresql.ARRAY.Comparator.contains(), postgresql.ARRAY.Comparator.contained_by(), and postgresql.ARRAY.Comparator.overlap(), e.g.:

mytable.c.data.contains([1, 2])

The postgresql.ARRAY type may not be supported on all PostgreSQL DBAPIs; it is currently known to work on psycopg2 only. Additionally, the postgresql.ARRAY type does not work directly in conjunction with the ENUM type. For a workaround, see the special type at Using ENUM with ARRAY.

class Comparator(expr)

Bases: sqlalchemy.types.Comparator

Define comparison operations for ARRAY. Note that these operations are in addition to those provided by the base types.ARRAY.Comparator class, including types.ARRAY.Comparator.any() and types.ARRAY.Comparator.all().

contained_by(other)
Boolean expression. Test if elements are a proper subset of the elements of the argument array expression.

contains(other, **kwargs)
Boolean expression. Test if elements are a superset of the elements of the argument array expression.

overlap(other)
Boolean expression. Test if array has elements in common with an argument array expression.


__init__(item_type, as_tuple=False, dimensions=None, zero_indexes=False)

Construct an ARRAY.

E.g.:

Column('myarray', ARRAY(Integer))

Arguments are:

Parameters

- item_type - The data type of items of this array. Note that dimensionality is irrelevant here, so multi-dimensional arrays like INTEGER[][] are constructed as ARRAY(Integer), not as ARRAY(ARRAY(Integer)) or such.
- as_tuple=False - Specify whether return results should be converted to tuples from lists. DBAPIs such as psycopg2 return lists by default.
When tuples are returned, the results are hashable.

- dimensions - if non-None, the ARRAY will assume a fixed number of dimensions. This will cause the DDL emitted for this ARRAY to include the exact number of bracket clauses [], and will also optimize the performance of the type overall. Note that PG arrays are always implicitly "non-dimensioned", meaning they can store any number of dimensions no matter how they were declared.

- zero_indexes=False - when True, index values will be converted between Python zero-based and PostgreSQL one-based indexes, e.g. a value of one will be added to all index values before passing to the database.


sqlalchemy.dialects.postgresql.array_agg(*arg, **kw)

PostgreSQL-specific form of array_agg; ensures the return type is postgresql.ARRAY and not the plain types.ARRAY, unless an explicit type_ is passed.


sqlalchemy.dialects.postgresql.Any(other, arrexpr, operator=)

A synonym for the ARRAY.Comparator.any() method. This method is legacy and is here for backwards-compatibility.


sqlalchemy.dialects.postgresql.All(other, arrexpr, operator=)

A synonym for the ARRAY.Comparator.all() method. This method is legacy and is here for backwards-compatibility.


class sqlalchemy.dialects.postgresql.BIT(length=None, varying=False)

Bases: sqlalchemy.types.TypeEngine


class sqlalchemy.dialects.postgresql.BYTEA(length=None)

Bases: sqlalchemy.types.LargeBinary

__init__(length=None)

Construct a LargeBinary type.

Parameters

length - optional, a length for the column for use in DDL statements, for those binary types that accept a length, such as the MySQL BLOB type.


class sqlalchemy.dialects.postgresql.CIDR

Bases: sqlalchemy.types.TypeEngine


class sqlalchemy.dialects.postgresql.DOUBLE_PRECISION(precision=None, asdecimal=False, decimal_return_scale=None)

Bases: sqlalchemy.types.Float

__init__(precision=None, asdecimal=False, decimal_return_scale=None)

Construct a Float.

Parameters

- precision - the numeric precision for use in DDL CREATE TABLE.
- asdecimal - the same flag as that of Numeric, but defaults to False. Note that setting this flag to True results in floating point conversion.
- decimal_return_scale - Default scale to use when converting from floats to Python decimals. Floating point values will typically be much longer due to decimal inaccuracy, and most floating point database types don't have a notion of "scale", so by default the float type looks for the first ten decimal places when converting. Specifying this value will override that length. Note that the MySQL float types, which do include "scale", will use "scale" as the default for decimal_return_scale, if not otherwise specified.


class sqlalchemy.dialects.postgresql.ENUM(*enums, **kw)

Bases: sqlalchemy.types.NativeForEmulated, sqlalchemy.types.Enum

PostgreSQL ENUM type. This is a subclass of types.Enum which includes support for PG's CREATE TYPE and DROP TYPE.

When the builtin type types.Enum is used and the Enum.native_enum flag is left at its default of True, the PostgreSQL backend will use a postgresql.ENUM type as the implementation, so the special create/drop rules will be used.

The create/drop behavior of ENUM is necessarily intricate, due to the awkward relationship the ENUM type has with its parent table, in that it may be "owned" by just a single table, or may be shared among many tables.

When using types.Enum or postgresql.ENUM in an "inline" fashion, the CREATE TYPE and DROP TYPE are emitted corresponding to when the Table.create() and Table.drop() methods are called:

table = Table('sometable', metadata,
    Column('some_enum', ENUM('a', 'b', 'c', name='myenum'))
)

table.create(engine)  # will emit CREATE ENUM and CREATE TABLE
table.drop(engine)  # will emit DROP TABLE and DROP ENUM

To use a common enumerated type between multiple tables, the best practice is to declare the types.Enum or postgresql.ENUM independently, and associate it with the MetaData object itself:

my_enum = ENUM('a', 'b', 'c', name='myenum', metadata=metadata)

t1 = Table('sometable_one', metadata,
    Column('some_enum', my_enum)
)

t2 = Table('sometable_two', metadata,
    Column('some_enum', my_enum)
)

When this pattern is used, care must still be taken at the level of individual table creates. Emitting CREATE TABLE without also specifying checkfirst=True will still cause issues:

t1.create(engine) # will fail: no such type 'myenum'

If we specify checkfirst=True, the individual table-level create operation will check for the ENUM and create it if not present:

# will check if enum exists, and emit CREATE TYPE if not
t1.create(engine, checkfirst=True)

When using a metadata-level ENUM type, the type will always be created and dropped when the metadata-wide create/drop is called:

metadata.create_all(engine)  # will emit CREATE TYPE
metadata.drop_all(engine)  # will emit DROP TYPE

The type can also be created and dropped directly:

my_enum.create(engine)
my_enum.drop(engine)

Changed in version 1.0.0: The PostgreSQL postgresql.ENUM type now behaves more strictly with regards to CREATE/DROP.
A metadata-level ENUM type will only be created and dropped at the metadata level, not the table level, with the exception of table.create(checkfirst=True). The table.drop() call will now emit a DROP TYPE for a table-level enumerated type.

__init__(*enums, **kw)

Construct an ENUM. Arguments are the same as that of types.Enum, but also include the following parameters.

Parameters

create_type - Defaults to True. Indicates that CREATE TYPE should be emitted, after optionally checking for the presence of the type, when the parent table is being created; and additionally that DROP TYPE is called when the table is dropped. When False, no check will be performed and no CREATE TYPE or DROP TYPE is emitted, unless create() or drop() are called directly. Setting to False is helpful when invoking a creation scheme to a SQL file without access to the actual database - the create() and drop() methods can be used to emit SQL to a target bind.


create(bind=None, checkfirst=True)

Emit CREATE TYPE for this ENUM. If the underlying dialect does not support PostgreSQL CREATE TYPE, no action is taken.

Parameters

- bind - a connectable Engine, Connection, or similar object to emit SQL.
- checkfirst - if True, a query against the PG catalog will first be performed to see whether the type already exists before creating it.


drop(bind=None, checkfirst=True)

Emit DROP TYPE for this ENUM. If the underlying dialect does not support PostgreSQL DROP TYPE, no action is taken.

Parameters

- bind - a connectable Engine, Connection, or similar object to emit SQL.
- checkfirst - if True, a query against the PG catalog will first be performed to see whether the type actually exists before dropping it.


class sqlalchemy.dialects.postgresql.HSTORE(text_type=None)

Bases: sqlalchemy.types.Indexable, sqlalchemy.types.Concatenable, sqlalchemy.types.TypeEngine

Represent the PostgreSQL HSTORE type. The HSTORE type stores dictionaries containing strings, e.g.:

data_table = Table('data_table', metadata,
    Column('id', Integer, primary_key=True),
    Column('data', HSTORE)
)

with engine.connect() as conn:
    conn.execute(
        data_table.insert(),
        data={"key1": "value1", "key2": "value2"}
    )

HSTORE provides for a wide range of operations, including:

Index operations:

data_table.c.data['some key'] == 'some value'

Containment operations:

data_table.c.data.has_key('some key')

data_table.c.data.has_all(['one', 'two', 'three'])

Concatenation:

data_table.c.data + {"k1": "v1"}

For a full list of special methods see HSTORE.comparator_factory.

For usage with the SQLAlchemy ORM, it may be desirable to combine the usage of HSTORE with the MutableDict dictionary, now part of the sqlalchemy.ext.mutable extension. This extension will allow "in-place" changes to the dictionary, e.g. addition of new keys or replacement/removal of existing keys to/from the current dictionary, to produce events which will be detected by the unit of work:

from sqlalchemy.ext.mutable import MutableDict

class MyClass(Base):
    __tablename__ = 'data_table'

    id = Column(Integer, primary_key=True)
    data = Column(MutableDict.as_mutable(HSTORE))

my_object = session.query(MyClass).one()

# in-place mutation, requires Mutable extension
# in order for the ORM to detect
my_object.data['some_key'] = 'some value'

session.commit()

When the sqlalchemy.ext.mutable extension is not used, the ORM will not be alerted to any changes to the contents of an existing dictionary, unless that dictionary value is re-assigned to the HSTORE-attribute itself, thus generating a change event.

See also

hstore - render the PostgreSQL hstore() function.


class Comparator(expr)

Bases: sqlalchemy.types.Comparator, sqlalchemy.types.Comparator

Define comparison operations for HSTORE.

array()
Text array expression. Returns array of alternating keys and values.

contained_by(other)
Boolean expression. Test if keys are a proper subset of the keys of the argument hstore expression.

contains(other, **kwargs)
Boolean expression. Test if keys (or array) are a superset of / contained within the keys of the argument hstore expression.

defined(key)
Boolean expression. Test for presence of a non-NULL value for the key. Note that the key may be a SQLA expression.

delete(key)
HStore expression. Returns the contents of this hstore with the given key deleted. Note that the key may be a SQLA expression.

has_all(other)
Boolean expression. Test for presence of all keys.

has_any(other)
Boolean expression. Test for presence of any key.

has_key(other)
Boolean expression. Test for presence of a key. Note that the key may be a SQLA expression.

keys()
Text array expression. Returns array of keys.

matrix()
Text array expression. Returns array of [key, value] pairs.

slice(array)
HStore expression. Returns a subset of an hstore defined by an array of keys.

vals()
Text array expression. Returns array of values.


__init__(text_type=None)

Construct a new HSTORE.

Parameters

text_type - the type that should be used for indexed values. Defaults to types.Text.


bind_processor(dialect)

Return a conversion function for processing bind values. Returns a callable which will receive a bind parameter value as the sole positional argument and will return a value to send to the DB-API. If processing is not necessary, the method should return None.

Parameters

dialect - Dialect instance in use.


comparator_factory

alias of HSTORE.Comparator


result_processor(dialect, coltype)

Return a conversion function for processing result row values. Returns a callable which will receive a result row column value as the sole positional argument and will return a value to return to the user. If processing is not necessary, the method should return None.

Parameters

- dialect - Dialect instance in use.
- coltype - DBAPI coltype argument received in cursor.description.


class sqlalchemy.dialects.postgresql.hstore(*args, **kwargs)

Bases: sqlalchemy.sql.functions.GenericFunction

Construct an hstore value within a SQL expression using the PostgreSQL hstore() function. The hstore function accepts one or two arguments as described in the PostgreSQL documentation.

E.g.:

from sqlalchemy.dialects.postgresql import array, hstore

select([hstore('key1', 'value1')])

select([
        hstore(
            array(['key1', 'key2', 'key3']),
            array(['value1', 'value2', 'value3'])
        )
    ])

See also

HSTORE - the PostgreSQL HSTORE datatype.

type

alias of HSTORE


class sqlalchemy.dialects.postgresql.INET

Bases: sqlalchemy.types.TypeEngine


class sqlalchemy.dialects.postgresql.INTERVAL(precision=None, fields=None)

Bases: sqlalchemy.types.NativeForEmulated, sqlalchemy.types._AbstractInterval

PostgreSQL INTERVAL type. The INTERVAL type may not be supported on all DBAPIs. It is known to work on psycopg2 and not pg8000 or zxjdbc.

__init__(precision=None, fields=None)

Construct an INTERVAL.

Parameters

- precision - optional integer precision value
- fields - string fields specifier. Allows storage of fields to be limited, such as "YEAR", "MONTH", "DAY TO HOUR", etc.


class sqlalchemy.dialects.postgresql.JSON(none_as_null=False, astext_type=None)

Bases: sqlalchemy.types.JSON

Represent the PostgreSQL JSON type. This type is a specialization of the Core-level types.JSON type.
Be sure to read the documentation for types.JSON for important tips regarding treatment of NULL values and ORM use.

The operators provided by the PostgreSQL version of JSON include:

Index operations (the -> operator):

data_table.c.data['some key']

data_table.c.data[5]

Index operations returning text (the ->> operator):

data_table.c.data['some key'].astext == 'some value'

Index operations with CAST (equivalent to CAST(col ->> ['some key'] AS <type>)):

data_table.c.data['some key'].astext.cast(Integer) == 5

Path index operations (the #> operator):

data_table.c.data[('key_1', 'key_2', 5, ..., 'key_n')]

Path index operations returning text (the #>> operator):

data_table.c.data[('key_1', 'key_2', 5, ..., 'key_n')].astext == 'some value'

Changed in version 1.1: The ColumnElement.cast() operator on JSON objects now requires that the JSON.Comparator.astext modifier be called explicitly, if the cast works only from a textual string.

Index operations return an expression object whose type defaults to JSON, so that further JSON-oriented instructions may be called upon the result type.

Custom serializers and deserializers are specified at the dialect level, that is, using create_engine(). The reason for this is that when using psycopg2, the DBAPI only allows serializers at the per-cursor or per-connection level.
E.g.:

engine = create_engine("postgresql://scott:tiger@localhost/test",
                        json_serializer=my_serialize_fn,
                        json_deserializer=my_deserialize_fn
                )

When using the psycopg2 dialect, the json_deserializer is registered against the database using psycopg2.extras.register_default_json.

class Comparator(expr)

Bases: sqlalchemy.types.Comparator

Define comparison operations for JSON.

property astext

On an indexed expression, use the "astext" (e.g. "->>") conversion when rendered in SQL.

E.g.:

select([data_table.c.data['some key'].astext])


__init__(none_as_null=False, astext_type=None)

Construct a JSON type.

Parameters

- none_as_null - if True, persist the value None as a SQL NULL value, not the JSON encoding of null. Note that when this flag is False, the null() construct can still be used to persist a NULL value:

from sqlalchemy import null
conn.execute(table.insert(), data=null())

Changed in version 0.9.8: Added none_as_null, and null() is now supported in order to persist a NULL value.

- astext_type - the type to use for the JSON.Comparator.astext accessor on indexed attributes. Defaults to types.Text.


comparator_factory

alias of JSON.Comparator


class sqlalchemy.dialects.postgresql.JSONB(none_as_null=False, astext_type=None)

Bases: sqlalchemy.dialects.postgresql.json.JSON

Represent the PostgreSQL JSONB type. The JSONB type stores arbitrary JSONB format data, e.g.:

data_table = Table('data_table', metadata,
    Column('id', Integer, primary_key=True),
    Column('data', JSONB)
)

with engine.connect() as conn:
    conn.execute(
        data_table.insert(),
        data={"key1": "value1", "key2": "value2"}
    )

The JSONB type includes all operations provided by JSON, including the same behaviors for indexing operations. It also adds additional operators specific to JSONB, including JSONB.Comparator.has_key(), JSONB.Comparator.has_all(), JSONB.Comparator.has_any(), JSONB.Comparator.contains(), and JSONB.Comparator.contained_by().

Like the JSON type, the JSONB type does not detect in-place changes when used with the ORM, unless the sqlalchemy.ext.mutable extension is used.

Custom serializers and deserializers are shared with the JSON class, using the json_serializer and json_deserializer keyword arguments. These must be specified at the dialect level using create_engine(). When using psycopg2, the serializers are associated with the jsonb type using psycopg2.extras.register_default_jsonb on a per-connection basis, in the same way that psycopg2.extras.register_default_json is used to register these handlers with the json type.

class Comparator(expr)

Bases: sqlalchemy.dialects.postgresql.json.Comparator

Define comparison operations for JSONB.

contained_by(other)
Boolean expression. Test if keys are a proper subset of the keys of the argument jsonb expression.

contains(other, **kwargs)
Boolean expression. Test if keys (or array) are a superset of / contained within the keys of the argument jsonb expression.

has_all(other)
Boolean expression. Test for presence of all keys in jsonb.

has_any(other)
Boolean expression. Test for presence of any key in jsonb.

has_key(other)
Boolean expression. Test for presence of a key. Note that the key may be a SQLA expression.


comparator_factory

alias of JSONB.Comparator


class sqlalchemy.dialects.postgresql.MACADDR

Bases: sqlalchemy.types.TypeEngine


class sqlalchemy.dialects.postgresql.MONEY

Bases: sqlalchemy.types.TypeEngine

Provide the PostgreSQL MONEY type.


class sqlalchemy.dialects.postgresql.OID

Bases: sqlalchemy.types.TypeEngine

Provide the PostgreSQL OID type.


class sqlalchemy.dialects.postgresql.REAL(precision=None, asdecimal=False, decimal_return_scale=None)

Bases: sqlalchemy.types.Float

The SQL REAL type.

__init__(precision=None, asdecimal=False, decimal_return_scale=None)

Construct a Float.

Parameters

- precision - the numeric precision for use in DDL CREATE TABLE.
- asdecimal - the same flag as that of Numeric, but defaults to False. Note that setting this flag to True results in floating point conversion.
- decimal_return_scale - Default scale to use when converting from floats to Python decimals. Floating point values will typically be much longer due to decimal inaccuracy, and most floating point database types don't have a notion of "scale", so by default the float type looks for the first ten decimal places when converting. Specifying this value will override that length.
Note that the\nMySQL float types, which do include “scale”, will use “scale”\nas the default for decimal_return_scale, if not otherwise specified.\n\n\n\n\n\n\n\n\n\n\nclass sqlalchemy.dialects.postgresql.REGCLASS\n\nBases: sqlalchemy.types.TypeEngine\nProvide the PostgreSQL REGCLASS type.\n\n\n\n\nclass sqlalchemy.dialects.postgresql.TSVECTOR\n\nBases: sqlalchemy.types.TypeEngine\nle postgresql.TSVECTOR    type implements the PostgreSQL\ntext search type TSVECTOR.\nIt can be used to do full text queries on natural language\ndocuments.\n\n\n\n\nclass sqlalchemy.dialects.postgresql.UUID(as_uuid=False)\n\nBases: sqlalchemy.types.TypeEngine\nPostgreSQL UUID type.\nRepresents the UUID column type, interpreting\ndata either as natively returned by the DBAPI\nor as Python uuid objects.\nThe UUID type may not be supported on all DBAPIs.\nIt is known to work on psycopg2 and not pg8000.\n\n\n__init__(as_uuid=False)\n\nConstruct a UUID type.\n\nParamètres\n\nas_uuid=False – if True, values will be interpreted\nas Python uuid objects, converting to/from string via the\nDBAPI.\n\n\n\n\n\n\n\nRange Types\nThe new range column types found in PostgreSQL 9.2 onwards are\ncatered for by the following types:\n\n\nclass sqlalchemy.dialects.postgresql.INT4RANGE\n\nBases: sqlalchemy.dialects.postgresql.ranges.RangeOperators, sqlalchemy.types.TypeEngine\nRepresent the PostgreSQL INT4RANGE type.\n\n\n\n\nclass sqlalchemy.dialects.postgresql.INT8RANGE\n\nBases: sqlalchemy.dialects.postgresql.ranges.RangeOperators, sqlalchemy.types.TypeEngine\nRepresent the PostgreSQL INT8RANGE type.\n\n\n\n\nclass sqlalchemy.dialects.postgresql.NUMRANGE\n\nBases: sqlalchemy.dialects.postgresql.ranges.RangeOperators, sqlalchemy.types.TypeEngine\nRepresent the PostgreSQL NUMRANGE type.\n\n\n\n\nclass sqlalchemy.dialects.postgresql.DATERANGE\n\nBases: sqlalchemy.dialects.postgresql.ranges.RangeOperators, sqlalchemy.types.TypeEngine\nRepresent the PostgreSQL DATERANGE type.\n\n\n\n\nclass 
sqlalchemy.dialects.postgresql.TSRANGE\n\nBases: sqlalchemy.dialects.postgresql.ranges.RangeOperators, sqlalchemy.types.TypeEngine\nRepresent the PostgreSQL TSRANGE type.\n\n\n\n\nclass sqlalchemy.dialects.postgresql.TSTZRANGE\n\nBases: sqlalchemy.dialects.postgresql.ranges.RangeOperators, sqlalchemy.types.TypeEngine\nRepresent the PostgreSQL TSTZRANGE type.\n\n\nThe types above get most of their functionality from the following\nmixin:\n\n\nclass sqlalchemy.dialects.postgresql.ranges.RangeOperators\n\nThis mixin provides functionality for the Range Operators\nlisted in Table 9-44 of the postgres documentation for Range\nFunctions and Operators. It is used by all the range types\nprovided in the postgres    dialect and can likely be used for\nany range types you create yourself.\nNo extra support is provided for the Range Functions listed in\nTable 9-45 of the postgres documentation. For these, the normal\nfunc()    object should be used.\n\n\nclass comparator_factory(expr)\n\nBases: sqlalchemy.types.Comparator\nDefine comparison operations for range types.\n\n\n__ne__(other)\n\nBoolean expression. Returns true if two ranges are not equal\n\n\n\n\nadjacent_to(other)\n\nBoolean expression. Returns true if the range in the column\nis adjacent to the range in the operand.\n\n\n\n\ncontained_by(other)\n\nBoolean expression. Returns true if the column is contained\nwithin the right hand operand.\n\n\n\n\ncontient(other, **kw)\n\nBoolean expression. Returns true if the right hand operand,\nwhich can be an element or a range, is contained within the\ncolumn.\n\n\n\n\nnot_extend_left_of(other)\n\nBoolean expression. Returns true if the range in the column\ndoes not extend left of the range in the operand.\n\n\n\n\nnot_extend_right_of(other)\n\nBoolean expression. Returns true if the range in the column\ndoes not extend right of the range in the operand.\n\n\n\n\noverlaps(other)\n\nBoolean expression. 
Returns true if the column overlaps (has points in common with) the right hand operand.


strictly_left_of(other)

Boolean expression. Returns true if the column is strictly left of the right hand operand.


strictly_right_of(other)

Boolean expression. Returns true if the column is strictly right of the right hand operand.


Warning
The range type DDL support should work with any PostgreSQL DBAPI driver, however the data types returned may vary. If you are using psycopg2, it's recommended to upgrade to version 2.5 or later before using these column types.

When instantiating models that use these column types, you should pass whatever data type is expected by the DBAPI driver you're using for the column type. For psycopg2 these are psycopg2.extras.NumericRange, psycopg2.extras.DateRange, psycopg2.extras.DateTimeRange and psycopg2.extras.DateTimeTZRange, or the class you've registered with psycopg2.extras.register_range.
For example:


from psycopg2.extras import DateTimeRange
from sqlalchemy.dialects.postgresql import TSRANGE

class RoomBooking(Base):

    __tablename__ = 'room_booking'

    room = Column(Integer(), primary_key=True)
    during = Column(TSRANGE())

booking = RoomBooking(
    room=101,
    during=DateTimeRange(datetime(2013, 3, 23), None)
)


PostgreSQL Constraint Types
SQLAlchemy supports PostgreSQL EXCLUDE constraints via the ExcludeConstraint class:


class sqlalchemy.dialects.postgresql.ExcludeConstraint(*elements, **kw)

Bases: sqlalchemy.schema.ColumnCollectionConstraint
A table-level EXCLUDE constraint.
Defines an EXCLUDE constraint as described in the postgres documentation.

__init__(*elements, **kw)

Create an ExcludeConstraint object.
E.g.:


const = ExcludeConstraint(
    (Column('period'), '&&'),
    (Column('group'), '='),
    where=(Column('group') != 'some
group')
)


The constraint is normally embedded into the Table construct directly, or added later using append_constraint():


some_table = Table(
    'some_table', metadata,
    Column('id', Integer, primary_key=True),
    Column('period', TSRANGE()),
    Column('group', String)
)

some_table.append_constraint(
    ExcludeConstraint(
        (some_table.c.period, '&&'),
        (some_table.c.group, '='),
        where=some_table.c.group != 'some group',
        name='some_table_excl_const'
    )
)


Parameters

*elements – A sequence of two tuples of the form (column, operator) where "column" is a SQL expression element or a raw SQL string, most typically a Column object, and "operator" is a string containing the operator to use. In order to specify a column name when a Column object is not available, while ensuring that any necessary quoting rules take effect, an ad-hoc Column or sql.expression.column() object should be used.

name – Optional, the in-database name of this constraint.

deferrable – Optional bool. If set, emit DEFERRABLE or NOT DEFERRABLE when issuing DDL for this constraint.

initially – Optional string. If set, emit INITIALLY <value> when issuing DDL for this constraint.

using – Optional string. If set, emit USING <index_method> when issuing DDL for this constraint. Defaults to 'gist'.

where – Optional SQL expression construct or literal SQL string. If set, emit WHERE <predicate> when issuing DDL for this constraint.

Warning
The ExcludeConstraint.where argument to ExcludeConstraint can be passed as a Python string argument, which will be treated as trusted SQL text and rendered as given.
DO NOT PASS UNTRUSTED INPUT TO THIS PARAMETER.


For example:

from sqlalchemy.dialects.postgresql import ExcludeConstraint, TSRANGE

class RoomBooking(Base):

    __tablename__ = 'room_booking'

    room = Column(Integer(), primary_key=True)
    during = Column(TSRANGE())

    __table_args__ = (
        ExcludeConstraint(('room', '='), ('during', '&&')),
    )


PostgreSQL DML Constructs


sqlalchemy.dialects.postgresql.dml.insert(table, values=None, inline=False, bind=None, prefixes=None, returning=None, return_defaults=False, **dialect_kw)

Construct a new Insert object.
This constructor is mirrored as a public API function; see insert() for a full usage and argument description.


class sqlalchemy.dialects.postgresql.dml.Insert(table, values=None, inline=False, bind=None, prefixes=None, returning=None, return_defaults=False, **dialect_kw)

Bases: sqlalchemy.sql.expression.Insert
PostgreSQL-specific implementation of INSERT.
Adds methods for PG-specific syntaxes such as ON CONFLICT.


excluded

Provide the excluded namespace for an ON CONFLICT statement.
PG's ON CONFLICT clause allows reference to the row that would be inserted, known as excluded.
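As a sketch of how this Insert construct, its excluded namespace, and the on_conflict_do_update() method fit together (the table and values here are hypothetical, for illustration only), an upsert statement can be built and compiled without any database connection:

```python
from sqlalchemy import Column, Integer, MetaData, String, Table
from sqlalchemy.dialects import postgresql
from sqlalchemy.dialects.postgresql import insert

# hypothetical table, for illustration only
metadata = MetaData()
users = Table(
    "users", metadata,
    Column("id", Integer, primary_key=True),
    Column("name", String),
)

stmt = insert(users).values(id=1, name="alice")
# on conflict against the primary key column, update "name" with the
# value that would have been inserted (the "excluded" row)
stmt = stmt.on_conflict_do_update(
    index_elements=["id"],
    set_={"name": stmt.excluded.name},
)

# renders an INSERT ... ON CONFLICT (id) DO UPDATE SET ... statement
print(stmt.compile(dialect=postgresql.dialect()))
```

Compiling against the PostgreSQL dialect, as above, is a convenient way to preview the ON CONFLICT clause before running the statement against a live database.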
This attribute provides all columns in this row to be referenceable.


on_conflict_do_nothing(constraint=None, index_elements=None, index_where=None)

Specifies a DO NOTHING action for ON CONFLICT clause.
The constraint and index_elements arguments are optional, but only one of these can be specified.

Parameters

constraint – The name of a unique or exclusion constraint on the table, or the constraint object itself if it has a .name attribute.

index_elements – A sequence consisting of string column names, Column objects, or other column expression objects that will be used to infer a target index.

index_where – Additional WHERE criterion that can be used to infer a conditional target index.


on_conflict_do_update(constraint=None, index_elements=None, index_where=None, set_=None, where=None)

Specifies a DO UPDATE SET action for ON CONFLICT clause.
Either the constraint or index_elements argument is required, but only one of these can be specified.

Parameters

constraint – The name of a unique or exclusion constraint on the table, or the constraint object itself if it has a .name attribute.

index_elements – A sequence consisting of string column names, Column objects, or other column expression objects that will be used to infer a target index.

index_where – Additional WHERE criterion that can be used to infer a conditional target index.

set_ – Required argument. A dictionary or other mapping object with column names as keys and expressions or literals as values, specifying the SET actions to take.
If the target Column specifies a ".key" attribute distinct from the column name, that key should be used.

Warning
This dictionary does not take into account Python-specified default UPDATE values or generation functions, e.g.
those specified using Column.onupdate. These values will not be exercised for an ON CONFLICT style of UPDATE, unless they are manually specified in the Insert.on_conflict_do_update.set_ dictionary.

where – Optional argument. If present, can be a literal SQL string or an acceptable expression for a WHERE clause that restricts the rows affected by DO UPDATE SET. Rows not meeting the WHERE condition will not be updated (effectively a DO NOTHING for those rows).


psycopg2
Support for the PostgreSQL database via the psycopg2 driver.

DBAPI
Documentation and download information (if applicable) for psycopg2 is available at:
http://pypi.python.org/pypi/psycopg2/

Connecting
Connect String:

postgresql+psycopg2://user:password@host:port/dbname[?key=value&key=value...]


psycopg2 Connect Arguments
psycopg2-specific keyword arguments which are accepted by create_engine() are:


server_side_cursors: Enable the usage of "server side cursors" for SQL statements which support this feature. What this essentially means from a psycopg2 point of view is that the cursor is created using a name, e.g. connection.cursor('some name'), which has the effect that result rows are not immediately pre-fetched and buffered after statement execution, but are instead left on the server and only retrieved as needed. SQLAlchemy's ResultProxy uses special row-buffering behavior when this feature is enabled, such that groups of 100 rows at a time are fetched over the wire to reduce conversational overhead. Note that the Connection.execution_options.stream_results execution option is a more targeted way of enabling this mode on a per-execution basis.

use_native_unicode: Enable the usage of Psycopg2 "native unicode" mode per connection.
True by default.

isolation_level: This option, available for all PostgreSQL dialects, includes the AUTOCOMMIT isolation level when using the psycopg2 dialect.

client_encoding: sets the client encoding in a libpq-agnostic way, using psycopg2's set_client_encoding() method.

executemany_mode, executemany_batch_page_size, executemany_values_page_size: Allows use of psycopg2 extensions for optimizing "executemany"-style queries. See the referenced section below for details.

use_batch_mode: this is the previous setting used to affect "executemany" mode and is now deprecated.


Unix Domain Connections
psycopg2 supports connecting via Unix domain connections. When the host portion of the URL is omitted, SQLAlchemy passes None to psycopg2, which specifies Unix-domain communication rather than TCP/IP communication:


create_engine("postgresql+psycopg2://user:password@/dbname")


By default, the socket file used is to connect to a Unix-domain socket in /tmp, or whatever socket directory was specified when PostgreSQL was built. This value can be overridden by passing a pathname to psycopg2, using host as an additional keyword argument:


create_engine("postgresql+psycopg2://user:password@/dbname?host=/var/lib/postgresql")


Empty DSN Connections / Environment Variable Connections
The psycopg2 DBAPI can connect to PostgreSQL by passing an empty DSN to the libpq client library, which by default indicates to connect to a localhost PostgreSQL database that is open for "trust" connections.
This behavior can be further tailored using a particular set of environment variables which are prefixed with PG_..., which are consumed by libpq to take the place of any or all elements of the connection string.
For this form, the URL can be passed without any elements other than the initial scheme:


engine = create_engine('postgresql+psycopg2://')


In the above form, a blank "dsn" string is passed to the psycopg2.connect() function which in turn represents an empty DSN passed to libpq.

New in version 1.3.2: support for parameter-less connections with psycopg2.

See also
Environment Variables - PostgreSQL documentation on how to use PG_... environment variables for connections.


Per-Statement/Connection Execution Options
The following DBAPI-specific options are respected when used with Connection.execution_options(), Executable.execution_options(), Query.execution_options(), in addition to those not specific to DBAPIs:


isolation_level - Set the transaction isolation level for the lifespan of a Connection (can only be set on a connection, not a statement or query). See Psycopg2 Transaction Isolation Level.

stream_results - Enable or disable usage of psycopg2 server side cursors - this feature makes use of "named" cursors in combination with special result handling methods so that result rows are not fully buffered. If None or not set, the server_side_cursors option of the Engine is used.

max_row_buffer - when using stream_results, an integer value that specifies the maximum number of rows to buffer at a time.
This is interpreted by the BufferedRowResultProxy, and if omitted the buffer will grow to ultimately store 1000 rows at a time.


Psycopg2 Fast Execution Helpers
Modern versions of psycopg2 include a feature known as Fast Execution Helpers, which have been shown in benchmarking to improve psycopg2's executemany() performance, primarily with INSERT statements, by multiple orders of magnitude. SQLAlchemy allows this extension to be used for all executemany() style calls invoked by an Engine when used with multiple parameter sets, which includes the use of this feature both by the Core as well as by the ORM for inserts of objects with non-autogenerated primary key values, by adding the executemany_mode flag to create_engine():


engine = create_engine(
    "postgresql+psycopg2://scott:tiger@host/dbname",
    executemany_mode='batch')


Changed in version 1.3.7: the use_batch_mode flag has been superseded by a new parameter executemany_mode which provides support both for psycopg2's execute_batch helper as well as the execute_values helper.

Possible options for executemany_mode include:


None - By default, psycopg2's extensions are not used, and the usual cursor.executemany() method is used when invoking batches of statements.

'batch' - Uses psycopg2.extras.execute_batch so that multiple copies of a SQL query, each one corresponding to a parameter set passed to executemany(), are joined into a single SQL string separated by a semicolon. This is the same behavior as was provided by the use_batch_mode=True flag.

'values' - For Core insert() constructs only (including those emitted by the ORM automatically), the psycopg2.extras.execute_values extension is used so that multiple parameter sets are grouped into a single INSERT statement and joined together with multiple VALUES expressions.
This method requires that the string text of the VALUES clause inside the INSERT statement is manipulated, so is only supported with a compiled insert() construct where the format is predictable. For all other constructs, including plain textual INSERT statements not rendered by the SQLAlchemy expression language compiler, the psycopg2.extras.execute_batch method is used. It is therefore important to note that "values" mode implies that "batch" mode is also used for all statements for which "values" mode does not apply.


For both strategies, the executemany_batch_page_size and executemany_values_page_size arguments control how many parameter sets should be represented in each execution. Because "values" mode implies a fallback down to "batch" mode for non-INSERT statements, there are two independent page size arguments. For each, the default value of None means to use psycopg2's defaults, which at the time of this writing are quite low at 100. For the execute_values method, a number as high as 10000 may prove to be performant, whereas for execute_batch, as the number represents full statements repeated, a number closer to the default of 100 is likely more appropriate:


engine = create_engine(
    "postgresql+psycopg2://scott:tiger@host/dbname",
    executemany_mode='values',
    executemany_values_page_size=10000, executemany_batch_page_size=500)


Changed in version 1.3.7: Added support for psycopg2.extras.execute_values. The use_batch_mode flag is superseded by the executemany_mode flag.


Unicode with Psycopg2
By default, the psycopg2 driver uses the psycopg2.extensions.UNICODE extension, such that the DBAPI receives and returns all strings as Python Unicode objects directly; SQLAlchemy passes these values through without change.
Psycopg2 here will encode/decode string values based on the current "client encoding" setting; by default this is the value in the postgresql.conf file, which often defaults to SQL_ASCII. Typically, this can be changed to utf8, as a more useful default:


# postgresql.conf file

# client_encoding = sql_ascii # actually, defaults to database
                              # encoding
client_encoding = utf8


A second way to affect the client encoding is to set it within Psycopg2 locally. SQLAlchemy will call psycopg2's connection.set_client_encoding() method on all new connections based on the value passed to create_engine() using the client_encoding parameter:


# set_client_encoding() setting;
# works for *all* PostgreSQL versions
engine = create_engine("postgresql://user:pass@host/dbname",
                       client_encoding='utf8')


This overrides the encoding specified in the PostgreSQL client configuration. When using the parameter in this way, the psycopg2 driver emits SET client_encoding TO 'utf8' on the connection explicitly, and works in all PostgreSQL versions.
Note that the client_encoding setting as passed to create_engine() is not the same as the more recently added client_encoding parameter now supported by libpq directly.
This is enabled when client_encoding is passed directly to psycopg2.connect(), and from SQLAlchemy is passed using the create_engine.connect_args parameter:


engine = create_engine(
    "postgresql://user:pass@host/dbname",
    connect_args={'client_encoding': 'utf8'})

# using the query string is equivalent
engine = create_engine("postgresql://user:pass@host/dbname?client_encoding=utf8")


The above parameter was only added to libpq as of version 9.1 of PostgreSQL, so using the previous method is better for cross-version support.


Disabling Native Unicode
SQLAlchemy can also be instructed to skip the usage of the psycopg2 UNICODE extension and to instead utilize its own unicode encode/decode services, which are normally reserved only for those DBAPIs that don't fully support unicode directly. Passing use_native_unicode=False to create_engine() will disable usage of psycopg2.extensions.UNICODE. SQLAlchemy will instead encode data itself into Python bytestrings on the way in and coerce from bytes on the way back, using the value of the create_engine() encoding parameter, which defaults to utf-8. SQLAlchemy's own unicode encode/decode functionality is steadily becoming obsolete as most DBAPIs now support unicode fully.


Bound Parameter Styles
The default parameter style for the psycopg2 dialect is "pyformat", where SQL is rendered using %(paramname)s style. This format has the limitation that it does not accommodate the unusual case of parameter names that actually contain percent or parenthesis symbols; as SQLAlchemy in many cases generates bound parameter names based on the name of a column, the presence of these characters in a column name can lead to problems.
There are two solutions to the issue of a schema.Column that contains one of these characters in its name.
One is to specify the schema.Column.key for columns that have such names:


measurement = Table('measurement', metadata,
    Column('Size (meters)', Integer, key='size_meters')
)


Above, an INSERT statement such as measurement.insert() will use size_meters as the parameter name, and a SQL expression such as measurement.c.size_meters > 10 will derive the bound parameter name from the size_meters key as well.

Changed in version 1.0.0: SQL expressions will use Column.key as the source of naming when anonymous bound parameters are created in SQL expressions; previously, this behavior only applied to Table.insert() and Table.update() parameter names.

The other solution is to use a positional format; psycopg2 allows use of the "format" paramstyle, which can be passed to create_engine.paramstyle:


engine = create_engine(
    'postgresql://scott:tiger@localhost:5432/test', paramstyle='format')


With the above engine, instead of a statement like:


INSERT INTO measurement ("Size (meters)") VALUES (%(Size (meters))s)
{'Size (meters)': 1}


we instead see:


INSERT INTO measurement ("Size (meters)") VALUES (%s)
(1, )


Where above, the dictionary style is converted into a tuple with positional style.


Transactions
The psycopg2 dialect fully supports SAVEPOINT and two-phase commit operations.


Psycopg2 Transaction Isolation Level
As discussed in Transaction Isolation Level, all PostgreSQL dialects support setting of transaction isolation level both via the isolation_level parameter passed to create_engine(), as well as the isolation_level argument used by Connection.execution_options().
When using the psycopg2 dialect, these options make use of psycopg2's set_isolation_level() connection method, rather than emitting a PostgreSQL directive; this is because psycopg2's API-level setting is always emitted at the start of each transaction in any case.
The psycopg2 dialect supports these constants for isolation level:


READ COMMITTED

READ UNCOMMITTED

REPEATABLE READ

SERIALIZABLE

AUTOCOMMIT


NOTICE logging
The psycopg2 dialect will log PostgreSQL NOTICE messages via the sqlalchemy.dialects.postgresql logger. When this logger is set to the logging.INFO level, notice messages will be logged:


import logging

logging.getLogger('sqlalchemy.dialects.postgresql').setLevel(logging.INFO)


Above, it is assumed that logging is configured externally. If this is not the case, configuration such as logging.basicConfig() must be utilized:


import logging

logging.basicConfig()   # log messages to stdout
logging.getLogger('sqlalchemy.dialects.postgresql').setLevel(logging.INFO)


HSTORE type
The psycopg2 DBAPI includes an extension to natively handle marshalling of the HSTORE type.
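Independent of the driver-level extension discussed here, an HSTORE column can be declared on a table like any other type; a minimal sketch (the table and column names are hypothetical), previewing the resulting DDL by compiling against the PostgreSQL dialect with no connection required:

```python
from sqlalchemy import Column, Integer, MetaData, Table
from sqlalchemy.dialects import postgresql
from sqlalchemy.schema import CreateTable

# hypothetical key/value table, for illustration only
metadata = MetaData()
kv = Table(
    "kv", metadata,
    Column("id", Integer, primary_key=True),
    Column("data", postgresql.HSTORE),
)

# renders a CREATE TABLE statement with a "data HSTORE" column
print(CreateTable(kv).compile(dialect=postgresql.dialect()))
```

Whether Python dictionaries bound to such a column are marshalled by psycopg2's native extension or by SQLAlchemy's own logic is governed by the detection sequence described next.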
The SQLAlchemy psycopg2 dialect will enable this extension by default when psycopg2 version 2.4 or greater is used, and it is detected that the target database has the HSTORE type set up for use. In other words, when the dialect makes the first connection, a sequence like the following is performed:


Request the available HSTORE oids using psycopg2.extras.HstoreAdapter.get_oids(). If this function returns a list of HSTORE identifiers, we then determine that the HSTORE extension is present. This function is skipped if the version of psycopg2 installed is less than version 2.4.

If the use_native_hstore flag is at its default of True, and we've detected that HSTORE oids are available, the psycopg2.extensions.register_hstore() extension is invoked for all connections.


The register_hstore() extension has the effect of all Python dictionaries being accepted as parameters regardless of the type of target column in SQL. The dictionaries are converted by this extension into a textual HSTORE expression. If this behavior is not desired, disable the use of the hstore extension by setting use_native_hstore to False as follows:


engine = create_engine("postgresql+psycopg2://scott:tiger@localhost/test",
            use_native_hstore=False)


The HSTORE type is still supported when the psycopg2.extensions.register_hstore() extension is not used.
It merely means that the coercion between Python dictionaries and the HSTORE string format, on both the parameter side and the result side, will take place within SQLAlchemy's own marshalling logic, and not that of psycopg2 which may be more performant.


pg8000
Support for the PostgreSQL database via the pg8000 driver.

DBAPI
Documentation and download information (if applicable) for pg8000 is available at:
https://pythonhosted.org/pg8000/

Connecting
Connect String:

postgresql+pg8000://user:password@host:port/dbname[?key=value&key=value...]


Note
The pg8000 dialect is not tested as part of SQLAlchemy's continuous integration and may have unresolved issues. The recommended PostgreSQL dialect is psycopg2.


Unicode
pg8000 will encode / decode string values between it and the server using the PostgreSQL client_encoding parameter; by default this is the value in the postgresql.conf file, which often defaults to SQL_ASCII. Typically, this can be changed to utf-8, as a more useful default:


#client_encoding = sql_ascii # actually, defaults to database
                             # encoding
client_encoding = utf8


The client_encoding can be overridden for a session by executing the SQL:

SET CLIENT_ENCODING TO 'utf8';

SQLAlchemy will execute this SQL on all new connections based on the value passed to create_engine() using the client_encoding parameter:


engine = create_engine(
    "postgresql+pg8000://user:pass@host/dbname", client_encoding='utf8')


pg8000 Transaction Isolation Level
The pg8000 dialect offers the same isolation level settings as that of the psycopg2 dialect:


READ COMMITTED

READ UNCOMMITTED

REPEATABLE READ

SERIALIZABLE

AUTOCOMMIT


New in version 0.9.5: support for AUTOCOMMIT isolation level when using pg8000.


psycopg2cffi
Support for the PostgreSQL database via the psycopg2cffi
driver.

DBAPI
Documentation and download information (if applicable) for psycopg2cffi is available at:
http://pypi.python.org/pypi/psycopg2cffi/

Connecting
Connect String:

postgresql+psycopg2cffi://user:password@host:port/dbname[?key=value&key=value...]


psycopg2cffi is an adaptation of psycopg2, using CFFI for the C layer. This makes it suitable for use in e.g. PyPy. Documentation is as per psycopg2.


py-postgresql
Support for the PostgreSQL database via the py-postgresql driver.

DBAPI
Documentation and download information (if applicable) for py-postgresql is available at:
http://python.projects.pgfoundry.org/

Connecting
Connect String:

postgresql+pypostgresql://user:password@host:port/dbname[?key=value&key=value...]


Note
The pypostgresql dialect is not tested as part of SQLAlchemy's continuous integration and may have unresolved issues. The recommended PostgreSQL driver is psycopg2.


pygresql
Support for the PostgreSQL database via the pygresql driver.

DBAPI
Documentation and download information (if applicable) for pygresql is available at:
http://www.pygresql.org/

Connecting
Connect String:

postgresql+pygresql://user:password@host:port/dbname[?key=value&key=value...]


Note
The pygresql dialect is not tested as part of SQLAlchemy's continuous integration and may have unresolved issues. The recommended PostgreSQL dialect is psycopg2.


zxjdbc
Support for the PostgreSQL database via the zxJDBC for Jython driver.

DBAPI
Drivers for this database are available at:
http://jdbc.postgresql.org/

Connecting
Connect String:

postgresql+zxjdbc://scott:tiger@localhost/db
Veuillez vous reporter aux sections individuelles de DBAPI pour obtenir des informations sur la connexion.","Séquences / SERIAL / IDENTITY\nPostgreSQL supporte les séquences et SQLAlchemy les utilise par défaut\nde créer de nouvelles valeurs de clé primaire pour les colonnes de clé primaire basées sur des nombres entiers. Quand\ncréer des tables, SQLAlchemy va publier le EN SÉRIE    type de données pour\ncolonnes de clé primaire basées sur des nombres entiers, qui génèrent une séquence et un côté serveur\ndéfaut correspondant à la colonne.\nPour spécifier une séquence nommée spécifique à utiliser pour la génération de clé primaire,\nUtilisez le Séquence()    construction:","Table(&#39;quelque chose&#39;, métadonnées,\n        Colonne(&#39;id&#39;, Entier, Séquence(&#39;some_id_seq&#39;), clé primaire=Vrai)\n    )","Lorsque SQLAlchemy émet une seule instruction INSERT, pour remplir le contrat de\nayant le &quot;dernier identifiant d&#39;insertion&quot; disponible, une clause RETURNING est ajoutée à\nl&#39;instruction INSERT qui spécifie les colonnes de clé primaire doit être\nretourné une fois la déclaration terminée. La fonctionnalité RETURNING ne prend que\nplace si PostgreSQL 8.2 ou version ultérieure est utilisé. Dans une approche de repli, le\nséquence, spécifiée explicitement ou implicitement via EN SÉRIE, est\npréalablement exécutée indépendamment, la valeur renvoyée à utiliser dans la\ninsertion ultérieure. 
Notez que lorsqu&#39;un\ninsérer()    la construction est exécutée en utilisant\nSémantique «executemany», la fonctionnalité «dernier identifiant inséré» ne\nappliquer; aucune clause RETURNING n’est émise et la séquence n’a pas été pré-exécutée dans cette\nCas.\nPour forcer l&#39;utilisation de RETURNING par défaut, spécifiez l&#39;indicateur.\nimplicit_returning = False    à create_engine ().","Colonnes PostgreSQL 10 IDENTITY\nPostgreSQL 10 a une nouvelle fonctionnalité IDENTITY qui remplace l’utilisation de SERIAL.\nLe support intégré pour le rendu de IDENTITY n’est pas encore disponible, mais le\nle crochet de compilation suivant peut être utilisé pour remplacer les occurrences de SERIAL par\nIDENTITÉ:","de sqlalchemy.schema importation CreateColumn\nde sqlalchemy.ext.compiler importation compile","@compiles(CreateColumn, &#39;postgresql&#39;)\ndef use_identity(élément, compilateur, **kw):\n    texte = compilateur.visit_create_column(élément, **kw)\n    texte = texte.remplacer(&quot;EN SÉRIE&quot;, &quot;INT GÉNÉRÉ PAR DÉFAUT COMME IDENTITÉ&quot;)\n    revenir texte","En utilisant ce qui précède, un tableau tel que:","t = Table(\n    &#39;t&#39;, m,\n    Colonne(&#39;id&#39;, Entier, clé primaire=Vrai),\n    Colonne(&#39;Les données&#39;, Chaîne)\n)","Générera sur la base de données de sauvegarde en tant que:","CRÉER TABLE t (\n    identifiant INT GÉNÉRÉ PAR DÉFAUT COMME IDENTITÉ NE PAS NUL,\n    Les données VARCHAR,\n    PRIMAIRE CLÉ (identifiant)\n)","Niveau d&#39;isolation de la transaction\nTous les dialectes PostgreSQL supportent la définition du niveau d&#39;isolation des transactions\nà la fois via un paramètre spécifique au dialecte\ncreate_engine.isolation_level    accepté par create_engine (),\naussi bien que Connection.execution_options.isolation_level\nargument passé à Connection.execution_options ().\nLors de l’utilisation d’un dialecte autre que psycopg2, cette fonction fonctionne en lançant la commande\nENSEMBLE SESSION LES CARACTÉRISTIQUES COMME 
TRANSACTION ISOLEMENT NIVEAU     pour\nchaque nouvelle connexion. Pour le niveau d&#39;isolement AUTOCOMMIT spécial,\nDes techniques spécifiques à DBAPI sont utilisées.\nPour définir le niveau d&#39;isolement à l&#39;aide de create_engine ():","moteur = create_engine(\n    &quot;postgresql + pg8000: // scott: tiger @ localhost / test&quot;,\n    niveau_isolement=&quot;READ UNCOMMITTED&quot;\n)","Pour définir à l&#39;aide des options d&#39;exécution par connexion:","lien = moteur.relier()\nlien = lien.execution_options(\n    niveau_isolement=&quot;LIRE ENGAGÉ&quot;\n)","Valeurs valides pour niveau_isolement    comprendre:","Introspection de la table de schémas distants et chemin de recherche PostgreSQL\nTL; DR;: garder le chemin_recherche    variable définie à sa valeur par défaut de Publique,\nnommer des schémas autre que Publique    explicitement dans Table    définitions.\nLe dialecte PostgreSQL peut refléter les tables de n’importe quel schéma. le\nTable.schema    argument, ou bien la\nMetaData.reflect.schema    l&#39;argument détermine quel schéma sera\nêtre recherché pour la ou les tables. Le reflété Table    objets\nconservera dans tous les cas cette .schéma    attribut comme spécifié.\nCependant, en ce qui concerne les tableaux que ces Table    les objets font référence à\nvia une contrainte de clé étrangère, une décision doit être prise quant à la .schéma\nest représenté dans ces tables distantes, dans le cas où cette distance\nnom de schéma est également un membre du courant\nChemin de recherche PostgreSQL.\nPar défaut, le dialecte PostgreSQL reproduit le comportement encouragé par\nPostgreSQL propre pg_get_constraintdef ()    procédure intégrée. Cette fonction\nrenvoie un exemple de définition pour une contrainte de clé étrangère particulière,\nomettant le nom de schéma référencé de cette définition lorsque le nom est\négalement dans le chemin de recherche du schéma PostgreSQL. 
The interaction below illustrates this behavior:

```sql
test=> CREATE TABLE test_schema.referred(id INTEGER PRIMARY KEY);
CREATE TABLE
test=> CREATE TABLE referring(
test(>         id INTEGER PRIMARY KEY,
test(>         referred_id INTEGER REFERENCES test_schema.referred(id));
CREATE TABLE
test=> SET search_path TO public, test_schema;
test=> SELECT pg_catalog.pg_get_constraintdef(r.oid, true) FROM
test-> pg_catalog.pg_class c JOIN pg_catalog.pg_namespace n
test-> ON n.oid = c.relnamespace
test-> JOIN pg_catalog.pg_constraint r ON c.oid = r.conrelid
test-> WHERE c.relname='referring' AND r.contype = 'f'
test-> ;
               pg_get_constraintdef
---------------------------------------------------
 FOREIGN KEY (referred_id) REFERENCES referred(id)
(1 row)
```

Above, we created a table referred as a member of the remote schema test_schema, however when we added test_schema to the PG search_path and then asked pg_get_constraintdef() for the FOREIGN KEY syntax, test_schema was not included in the output of the function.

On the other hand, if we set the search path back to the typical default of public:

```sql
test=> SET search_path TO public;
SET
```

The same query against pg_get_constraintdef() now returns the fully schema-qualified name for us:

```sql
test=> SELECT pg_catalog.pg_get_constraintdef(r.oid, true) FROM
test-> pg_catalog.pg_class c JOIN pg_catalog.pg_namespace n
test-> ON n.oid = c.relnamespace
test-> JOIN pg_catalog.pg_constraint r ON c.oid = r.conrelid
test-> WHERE c.relname='referring' AND r.contype = 'f';
                     pg_get_constraintdef
---------------------------------------------------------------
 FOREIGN KEY (referred_id) REFERENCES test_schema.referred(id)
(1 row)
```

SQLAlchemy will by default use the return value of pg_get_constraintdef() in order to determine the remote schema name. That is, if our search_path were set to include test_schema, and we invoked a table reflection process as follows:

```python
>>> from sqlalchemy import Table, MetaData, create_engine
>>> engine = create_engine("postgresql://scott:tiger@localhost/test")
>>> with engine.connect() as conn:
...     conn.execute("SET search_path TO test_schema, public")
...     meta = MetaData()
...     referring = Table('referring', meta,
...                       autoload=True, autoload_with=conn)
...
```

The above process would deliver to the MetaData.tables collection the referred table named without the schema:

```python
>>> meta.tables['referred'].schema is None
True
```

To alter the behavior of reflection such that the referred schema is maintained regardless of the search_path setting, use the postgresql_ignore_search_path option, which can be specified as a dialect-specific argument to both Table as well as MetaData.reflect():

```python
>>> with engine.connect() as conn:
...     conn.execute("SET search_path TO test_schema, public")
...     meta = MetaData()
...     referring = Table('referring', meta, autoload=True,
...                       autoload_with=conn,
...                       postgresql_ignore_search_path=True)
...
```

We will now have test_schema.referred stored as schema-qualified:

```python
>>> meta.tables['test_schema.referred'].schema
'test_schema'
```

Note that in all cases, the "default" schema is always reflected as None.
The "default" schema on PostgreSQL is that which is returned by the PostgreSQL current_schema() function. On a typical PostgreSQL installation, this is the name public. So a table that refers to another table which is in the public (i.e. default) schema will always have the .schema attribute set to None.

New in version 0.9.2: Added the postgresql_ignore_search_path dialect-level option accepted by Table and MetaData.reflect().

INSERT/UPDATE...RETURNING

The dialect supports PG 8.2's INSERT..RETURNING, UPDATE..RETURNING and DELETE..RETURNING syntaxes. INSERT..RETURNING is used by default for single-row INSERT statements in order to fetch newly generated primary key identifiers. To specify an explicit RETURNING clause, use the _UpdateBase.returning() method on a per-statement basis:

```python
# INSERT..RETURNING
result = table.insert().returning(table.c.col1, table.c.col2).\
    values(name='foo')
print(result.fetchall())
```

```python
# UPDATE..RETURNING
result = table.update().returning(table.c.col1, table.c.col2).\
    where(table.c.name == 'foo').values(name='bar')
print(result.fetchall())
```

```python
# DELETE..RETURNING
result = table.delete().returning(table.c.col1, table.c.col2).\
    where(table.c.name == 'foo')
print(result.fetchall())
```

INSERT...ON CONFLICT (Upsert)

Starting with version 9.5, PostgreSQL allows "upserts" (update or insert) of rows into a table via the ON CONFLICT clause of the INSERT statement. A candidate row will only be inserted if that row does not violate any unique constraints.
In the case of a unique constraint violation, the secondary action can be either "DO UPDATE", indicating that the data in the target row should be updated, or "DO NOTHING", which indicates to silently skip this row.

Conflicts are determined using existing unique constraints and indexes. These constraints may be identified either using their name as stated in DDL, or they may be inferred by stating the columns and conditions that comprise the indexes.

SQLAlchemy provides ON CONFLICT support via the PostgreSQL-specific postgresql.dml.insert() function, which provides the generative methods on_conflict_do_update() and on_conflict_do_nothing():

```python
from sqlalchemy.dialects.postgresql import insert

insert_stmt = insert(my_table).values(
    id='some_existing_id',
    data='inserted value')

do_nothing_stmt = insert_stmt.on_conflict_do_nothing(
    index_elements=['id']
)

conn.execute(do_nothing_stmt)

do_update_stmt = insert_stmt.on_conflict_do_update(
    constraint='pk_my_table',
    set_=dict(data='updated value')
)

conn.execute(do_update_stmt)
```

Both methods supply the "target" of the conflict using either the named constraint or by column inference:

The Insert.on_conflict_do_update.index_elements argument specifies a sequence containing string column names, Column objects, and/or SQL expression elements, which would identify a unique index:

```python
do_update_stmt = insert_stmt.on_conflict_do_update(
    index_elements=['id'],
    set_=dict(data='updated value')
)

do_update_stmt = insert_stmt.on_conflict_do_update(
    index_elements=[my_table.c.id],
    set_=dict(data='updated value')
)
```

When using Insert.on_conflict_do_update.index_elements to infer an index, a partial index can be inferred by also specifying the Insert.on_conflict_do_update.index_where parameter:

```python
from sqlalchemy.dialects.postgresql import insert

stmt = insert(my_table).values(user_email='a@b.com', data='inserted data')
stmt = stmt.on_conflict_do_update(
    index_elements=[my_table.c.user_email],
    index_where=my_table.c.user_email.like('%@gmail.com'),
    set_=dict(data=stmt.excluded.data)
    )
conn.execute(stmt)
```

The Insert.on_conflict_do_update.constraint argument is used to specify an index directly rather than inferring it. This can be the name of a UNIQUE constraint, a PRIMARY KEY constraint, or an INDEX:

```python
do_update_stmt = insert_stmt.on_conflict_do_update(
    constraint='my_table_idx_1',
    set_=dict(data='updated value')
)

do_update_stmt = insert_stmt.on_conflict_do_update(
    constraint='my_table_pk',
    set_=dict(data='updated value')
)
```

The Insert.on_conflict_do_update.constraint argument may also refer to a SQLAlchemy construct representing a constraint, e.g. UniqueConstraint, PrimaryKeyConstraint, Index, or ExcludeConstraint. In this use, if the constraint has a name, it is used directly. Otherwise, if the constraint is unnamed, then inference will be used, where the expressions and optional WHERE clause of the constraint will be spelled out in the construct.
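Because these constructs compile down to plain SQL strings, the generated statement can be inspected without a live database by compiling against the PostgreSQL dialect. Below is a minimal sketch; the table name and columns are invented for illustration:

```python
from sqlalchemy import Column, Integer, MetaData, String, Table
from sqlalchemy.dialects import postgresql
from sqlalchemy.dialects.postgresql import insert

metadata = MetaData()
# Hypothetical table used only for this illustration
my_table = Table(
    'my_table', metadata,
    Column('id', Integer, primary_key=True),
    Column('data', String),
)

stmt = insert(my_table).values(id=1, data='inserted value')
stmt = stmt.on_conflict_do_update(
    index_elements=['id'],
    set_=dict(data='updated value'),
)

# Compile against the PostgreSQL dialect without connecting to a database
sql = str(stmt.compile(dialect=postgresql.dialect()))
print(sql)
```

The rendered string contains an ON CONFLICT (id) DO UPDATE clause targeting the inferred index.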
This usage is especially convenient to refer to the named or unnamed primary key of a Table using the Table.primary_key attribute:

```python
do_update_stmt = insert_stmt.on_conflict_do_update(
    constraint=my_table.primary_key,
    set_=dict(data='updated value')
)
```

ON CONFLICT...DO UPDATE is used to perform an update of the already existing row, using any combination of new values as well as values from the proposed insertion. These values are specified using the Insert.on_conflict_do_update.set_ parameter. This parameter accepts a dictionary which consists of direct values for UPDATE:

```python
from sqlalchemy.dialects.postgresql import insert

stmt = insert(my_table).values(id='some_id', data='inserted value')
do_update_stmt = stmt.on_conflict_do_update(
    index_elements=['id'],
    set_=dict(data='updated value')
    )
conn.execute(do_update_stmt)
```

In order to refer to the proposed insertion row, the special alias excluded is available as an attribute on the postgresql.dml.Insert object; this object is a ColumnCollection which alias contains all columns of the target table:

```python
from sqlalchemy.dialects.postgresql import insert

stmt = insert(my_table).values(
    id='some_id',
    data='inserted value',
    author='jlh')
do_update_stmt = stmt.on_conflict_do_update(
    index_elements=['id'],
    set_=dict(data='updated value', author=stmt.excluded.author)
    )
conn.execute(do_update_stmt)
```

The Insert.on_conflict_do_update() method also accepts a WHERE clause using the Insert.on_conflict_do_update.where parameter, which will limit those rows which receive an UPDATE:

```python
from sqlalchemy.dialects.postgresql import insert

stmt = insert(my_table).values(
    id='some_id',
    data='inserted value',
    author='jlh')
on_update_stmt = stmt.on_conflict_do_update(
    index_elements=['id'],
    set_=dict(data='updated value', author=stmt.excluded.author),
    where=(my_table.c.status == 2)
    )
conn.execute(on_update_stmt)
```

ON CONFLICT may also be used to skip inserting a row entirely if any conflict with a unique or exclusion constraint occurs; below this is illustrated using the on_conflict_do_nothing() method:

```python
from sqlalchemy.dialects.postgresql import insert

stmt = insert(my_table).values(id='some_id', data='inserted value')
stmt = stmt.on_conflict_do_nothing(index_elements=['id'])
conn.execute(stmt)
```

If DO NOTHING is used without specifying any columns or constraint, it has the effect of skipping the INSERT for any unique or exclusion constraint violation which occurs:

```python
from sqlalchemy.dialects.postgresql import insert

stmt = insert(my_table).values(id='some_id', data='inserted value')
stmt = stmt.on_conflict_do_nothing()
conn.execute(stmt)
```

New in version 1.1: Added support for PostgreSQL ON CONFLICT clauses

Full Text Search

SQLAlchemy makes available the PostgreSQL @@ operator via the ColumnElement.match() method on any textual column expression. On a PostgreSQL dialect, an expression like the following:

```python
select([sometable.c.text.match("search string")])
```

will emit to the database:

```sql
SELECT text @@ to_tsquery('search string') FROM table
```
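Since match() also compiles to a plain string, the @@ rendering can be verified without a database. A small sketch follows, with a table invented for illustration (depending on the SQLAlchemy version, the right-hand side renders as a to_tsquery() or related tsquery function call):

```python
from sqlalchemy import Column, MetaData, String, Table
from sqlalchemy.dialects import postgresql

metadata = MetaData()
# Hypothetical table for illustration only
sometable = Table('sometable', metadata, Column('description', String))

expr = sometable.c.description.match('search string')

# Compile the expression against the PostgreSQL dialect
sql = str(expr.compile(dialect=postgresql.dialect()))
print(sql)
```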
PostgreSQL text search functions such as to_tsquery() and to_tsvector() are available explicitly using the standard func construct. For example:

```python
select([
    func.to_tsvector('fat cats ate rats').match('cat & rat')
])
```

Emits the equivalent of:

```sql
SELECT to_tsvector('fat cats ate rats') @@ to_tsquery('cat & rat')
```

The postgresql.TSVECTOR type can provide for explicit CAST:

```python
from sqlalchemy.dialects.postgresql import TSVECTOR
from sqlalchemy import select, cast
select([cast("some text", TSVECTOR)])
```

produces a statement equivalent to:

```sql
SELECT CAST('some text' AS TSVECTOR) AS anon_1
```

Full Text Searches in PostgreSQL are influenced by a combination of: the PostgreSQL setting of default_text_search_config, the regconfig used to build the GIN/GiST indexes, and the regconfig optionally passed in during a query.

When performing a Full Text Search against a column that has a GIN or GiST index that is already pre-computed (which is common on full text searches), one may need to explicitly pass in a particular PostgreSQL regconfig value to ensure the query planner utilizes the index and does not re-compute the column on demand.

In order to provide for this explicit query planning, or to use different search strategies, the match() method accepts a postgresql_regconfig keyword argument:

```python
select([mytable.c.id]).where(
    mytable.c.title.match('somestring', postgresql_regconfig='english')
)
```

Emits the equivalent of:

```sql
SELECT mytable.id FROM mytable
WHERE mytable.title @@ to_tsquery('english', 'somestring')
```

One can also specifically pass in a 'regconfig' value to the to_tsvector() command as the initial argument:

```python
select([mytable.c.id]).where(
    func.to_tsvector('english', mytable.c.title)
    .match('somestring', postgresql_regconfig='english')
)
```

produces a statement equivalent to:

```sql
SELECT mytable.id FROM mytable
WHERE to_tsvector('english', mytable.title) @@
    to_tsquery('english', 'somestring')
```

It is recommended that you use the EXPLAIN ANALYZE... tool from PostgreSQL to ensure that you are generating queries with SQLAlchemy that take full advantage of any indexes you may have created for full text search.

FROM ONLY ...

The dialect supports PostgreSQL's ONLY keyword for targeting only a particular table in an inheritance hierarchy. This can be used to produce the SELECT ... FROM ONLY, UPDATE ONLY ..., and DELETE FROM ONLY ... syntaxes. It uses SQLAlchemy's hints mechanism:

```python
# SELECT ... FROM ONLY ...
result = table.select().with_hint(table, 'ONLY', 'postgresql')
print(result.fetchall())

# UPDATE ONLY ...
table.update(values=dict(foo='bar')).with_hint('ONLY',
                                               dialect_name='postgresql')

# DELETE FROM ONLY ...
table.delete().with_hint('ONLY', dialect_name='postgresql')
```

PostgreSQL-Specific Index Options

Several extensions to the Index construct are available, specific to the PostgreSQL dialect.

Partial Indexes

Partial indexes add criterion to the index definition so that the index is applied to a subset of rows.
These can be specified on Index using the postgresql_where keyword argument:

```python
Index('my_index', my_table.c.id, postgresql_where=my_table.c.value > 10)
```

Operator Classes

PostgreSQL allows the specification of an operator class for each column of an index (see http://www.postgresql.org/docs/8.3/interactive/indexes-opclass.html). The Index construct allows these to be specified via the postgresql_ops keyword argument:

```python
Index(
    'my_index', my_table.c.id, my_table.c.data,
    postgresql_ops={
        'data': 'text_pattern_ops',
        'id': 'int4_ops'
    })
```

Note that the keys in the postgresql_ops dictionary are the "key" name of the Column, i.e. the name used to access it from the .c collection of Table, which can be configured to be different than the actual name of the column as expressed in the database.

If postgresql_ops is to be used against a complex SQL expression such as a function call, then to apply to the column it must be given a label that is identified in the dictionary by name, e.g.:

```python
Index(
    'my_index', my_table.c.id,
    func.lower(my_table.c.data).label('data_lower'),
    postgresql_ops={
        'data_lower': 'text_pattern_ops',
        'id': 'int4_ops'
    })
```

Index Types

PostgreSQL provides several index types: B-Tree, Hash, GiST, and GIN, as well as the ability for users to create their own (see http://www.postgresql.org/docs/8.3/static/indexes-types.html).
These can be specified on Index using the postgresql_using keyword argument:

```python
Index('my_index', my_table.c.data, postgresql_using='gin')
```

The value passed to the keyword argument will be simply passed through to the underlying CREATE INDEX command, so it must be a valid index type for your version of PostgreSQL.

Index Storage Parameters

PostgreSQL allows storage parameters to be set on indexes. The storage parameters available depend on the index method used by the index. Storage parameters can be specified on Index using the postgresql_with keyword argument:

```python
Index('my_index', my_table.c.data, postgresql_with={"fillfactor": 50})
```

PostgreSQL allows to define the tablespace in which to create the index. The tablespace can be specified on Index using the postgresql_tablespace keyword argument:

```python
Index('my_index', my_table.c.data, postgresql_tablespace='my_tablespace')
```

Note that the same option is available on Table as well.

Indexes with CONCURRENTLY

The PostgreSQL index option CONCURRENTLY is supported by passing the flag postgresql_concurrently to the Index construct:

```python
tbl = Table('testtbl', m, Column('data', Integer))

idx1 = Index('test_idx1', tbl.c.data, postgresql_concurrently=True)
```

The above index construct will render DDL for CREATE INDEX, assuming PostgreSQL 8.2 or higher is detected or for a connection-less dialect, as:

```sql
CREATE INDEX CONCURRENTLY test_idx1 ON testtbl (data)
```

For DROP INDEX, assuming PostgreSQL 9.2 or higher is detected or for a connection-less dialect, it will emit:

```sql
DROP INDEX CONCURRENTLY test_idx1
```
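The CREATE INDEX DDL can be previewed by compiling a CreateIndex construct against a connection-less PostgreSQL dialect, which renders the CONCURRENTLY keyword. A minimal sketch, using an invented table:

```python
from sqlalchemy import Column, Index, Integer, MetaData, Table
from sqlalchemy.dialects import postgresql
from sqlalchemy.schema import CreateIndex

m = MetaData()
# Hypothetical table for illustration
tbl = Table('testtbl', m, Column('data', Integer))
idx1 = Index('test_idx1', tbl.c.data, postgresql_concurrently=True)

# A connection-less dialect renders the CONCURRENTLY keyword
ddl = str(CreateIndex(idx1).compile(dialect=postgresql.dialect()))
print(ddl)
```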
New in version 1.1: support for CONCURRENTLY on DROP INDEX. The CONCURRENTLY keyword is now only emitted if a high enough version of PostgreSQL is detected on the connection (or for a connection-less dialect).

When using CONCURRENTLY, the PostgreSQL database requires that the statement be invoked outside of a transaction block. The Python DBAPI enforces that even for a single statement, a transaction is present, so to use this construct, the DBAPI's "autocommit" mode must be used:

```python
metadata = MetaData()
table = Table(
    "foo", metadata,
    Column("id", String))
index = Index(
    "foo_idx", table.c.id, postgresql_concurrently=True)

with engine.connect() as conn:
    with conn.execution_options(isolation_level='AUTOCOMMIT'):
        table.create(conn)
```

PostgreSQL Index Reflection

The PostgreSQL database creates a UNIQUE INDEX implicitly whenever the UNIQUE CONSTRAINT construct is used. When inspecting a table using Inspector, the Inspector.get_indexes() and the Inspector.get_unique_constraints() methods will report on these two constructs distinctly; in the case of the index, the key duplicates_constraint will be present in the index entry if it is detected as mirroring a constraint.
When performing reflection using Table(..., autoload=True), the UNIQUE INDEX is not returned in Table.indexes when it is detected as mirroring a UniqueConstraint in the Table.constraints collection.

Changed in version 1.0.0: Table reflection now includes UniqueConstraint objects present in the Table.constraints collection; the PostgreSQL backend will no longer include a "mirrored" Index construct in Table.indexes if it is detected as corresponding to a unique constraint.

Special Reflection Options

The Inspector used for the PostgreSQL backend is an instance of PGInspector, which offers additional methods:

```python
from sqlalchemy import create_engine, inspect

engine = create_engine("postgresql+psycopg2://localhost/test")
insp = inspect(engine)  # will be a PGInspector

print(insp.get_enums())
```

class sqlalchemy.dialects.postgresql.base.PGInspector(conn)

Bases: sqlalchemy.engine.reflection.Inspector

get_enums(schema=None)

Return a list of ENUM objects. Each member is a dictionary containing these fields:

- name – name of the enum
- schema – the schema name for the enum.
- visible – boolean, whether or not this enum is visible in the default search path.
- labels – a list of string labels that apply to the enum.

Parameters:

- schema – schema name. If None, the default schema (typically 'public') is used.
May also be set to '*' to indicate load enums for all schemas.

get_foreign_table_names(schema=None)

Return a list of FOREIGN TABLE names. Behavior is similar to that of Inspector.get_table_names(), except that the list is limited to those tables that report a relkind value of f.

get_table_oid(table_name, schema=None)

Return the OID for the given table name.

get_view_names(schema=None, include=('plain', 'materialized'))

Return all view names in schema.

Parameters:

- schema – Optional, retrieve names from a non-default schema. For special quoting, use quoted_name.
- include – specify which types of views to return. Passed as a string value (for a single type) or a tuple (for any number of types). Defaults to ('plain', 'materialized').

PostgreSQL Table Options

Several options for CREATE TABLE are supported directly by the PostgreSQL dialect in conjunction with the Table construct.

ARRAY Types

The PostgreSQL dialect supports arrays, both as multidimensional column types as well as array literals.

JSON Types

The PostgreSQL dialect supports both JSON and JSONB datatypes, including psycopg2's native support and support for all of PostgreSQL's special operators.

HSTORE Type

The PostgreSQL HSTORE type as well as hstore literals are supported.

ENUM Types

PostgreSQL has an independently creatable TYPE structure which is used to implement an enumerated type. This approach introduces significant complexity on the SQLAlchemy side in terms of when this type should be CREATED and DROPPED. The type object is also an independently reflectable entity.
The following sections should be consulted:

Using ENUM with ARRAY

The combination of ENUM and ARRAY is not directly supported by backend DBAPIs at this time. In order to send and receive an ARRAY of ENUM, use the following workaround type, which decorates the postgresql.ARRAY datatype:

```python
import re

import sqlalchemy as sa
from sqlalchemy import TypeDecorator
from sqlalchemy.dialects.postgresql import ARRAY

class ArrayOfEnum(TypeDecorator):
    impl = ARRAY

    def bind_expression(self, bindvalue):
        return sa.cast(bindvalue, self)

    def result_processor(self, dialect, coltype):
        super_rp = super(ArrayOfEnum, self).result_processor(
            dialect, coltype)

        def handle_raw_string(value):
            inner = re.match(r"^{(.*)}$", value).group(1)
            return inner.split(",") if inner else []

        def process(value):
            if value is None:
                return None
            return super_rp(handle_raw_string(value))
        return process
```

E.g.:

```python
Table(
    'mydata', metadata,
    Column('id', Integer, primary_key=True),
    Column('data', ArrayOfEnum(ENUM('a', 'b', 'c', name='myenum')))
)
```

This type is not included as a built-in type as it would be incompatible with a DBAPI that suddenly decides to support ARRAY of ENUM directly in a new version.

Using JSON/JSONB with ARRAY

Similar to using ENUM, for an ARRAY of JSON/JSONB we need to render the appropriate CAST, however current psycopg2 drivers seem to handle the result for ARRAY of JSON automatically, so the type is simpler:

```python
class CastingArray(ARRAY):
    def bind_expression(self, bindvalue):
        return sa.cast(bindvalue, self)
```

E.g.:

```python
Table(
    'mydata', metadata,
    Column('id', Integer, primary_key=True),
    Column('data', CastingArray(JSONB))
)
```

PostgreSQL Data Types

As with all SQLAlchemy dialects, all UPPERCASE types that are known to be valid with PostgreSQL are importable from the top level dialect, whether they originate from sqlalchemy.types or from the local dialect:

```python
from sqlalchemy.dialects.postgresql import \
    ARRAY, BIGINT, BIT, BOOLEAN, BYTEA, CHAR, CIDR, DATE, \
    DOUBLE_PRECISION, ENUM, FLOAT, HSTORE, INET, INTEGER, \
    INTERVAL, JSON, JSONB, MACADDR, MONEY, NUMERIC, OID, REAL, SMALLINT, TEXT, \
    TIME, TIMESTAMP, UUID, VARCHAR, INT4RANGE, INT8RANGE, NUMRANGE, \
    DATERANGE, TSRANGE, TSTZRANGE, TSVECTOR
```

Types which are specific to PostgreSQL, or have PostgreSQL-specific construction arguments, are as follows:

class sqlalchemy.dialects.postgresql.aggregate_order_by(target, *order_by)

Bases: sqlalchemy.sql.expression.ColumnElement

Represent a PostgreSQL aggregate order by expression.

E.g.:

```python
from sqlalchemy.dialects.postgresql import aggregate_order_by
expr = func.array_agg(aggregate_order_by(table.c.a, table.c.b.desc()))
stmt = select([expr])
```

would represent the expression:

```sql
SELECT array_agg(a ORDER BY b DESC) FROM table;
```

Similarly:

```python
expr = func.string_agg(
    table.c.a,
    aggregate_order_by(literal_column("','"), table.c.a)
)
stmt = select([expr])
```

Would represent:

```sql
SELECT string_agg(a, ',' ORDER BY a) FROM table;
```

Changed in version 1.2.13: the ORDER BY argument may be multiple terms

class sqlalchemy.dialects.postgresql.array(clauses, **kw)

Bases: sqlalchemy.sql.expression.Tuple

A PostgreSQL ARRAY literal.

This is used to produce ARRAY literals in SQL expressions, e.g.:

```python
from sqlalchemy.dialects.postgresql import array
from sqlalchemy.dialects import postgresql
from sqlalchemy import select, func

stmt = select([
    array([1, 2]) + array([3, 4, 5])
])

print(stmt.compile(dialect=postgresql.dialect()))
```

Produces the SQL:

```sql
SELECT ARRAY[%(param_1)s, %(param_2)s] ||
    ARRAY[%(param_3)s, %(param_4)s, %(param_5)s] AS anon_1
```

An instance of array will always have the datatype ARRAY. The "inner" type of the array is inferred from the values present, unless the type_ keyword argument is passed:

```python
array(['foo', 'bar'], type_=CHAR)
```

Multidimensional arrays are produced by nesting array constructs. The dimensionality of the final ARRAY type is calculated by recursively adding the dimensions of the inner ARRAY type:

```python
stmt = select([
    array([
        array([1, 2]), array([3, 4]), array([column('q'), column('x')])
    ])
])
print(stmt.compile(dialect=postgresql.dialect()))
```

Produces:

```sql
SELECT ARRAY[ARRAY[%(param_1)s, %(param_2)s],
ARRAY[%(param_3)s, %(param_4)s], ARRAY[q, x]] AS anon_1
```

New in version 1.3.6: added support for multidimensional array literals

class sqlalchemy.dialects.postgresql.ARRAY(item_type, as_tuple=False, dimensions=None, zero_indexes=False)

Bases: sqlalchemy.types.ARRAY

PostgreSQL ARRAY type.

The postgresql.ARRAY type is constructed in the same way as the core types.ARRAY type; a member type is required, and a number of dimensions is recommended if the type is to be used for more than one dimension:

```python
from sqlalchemy.dialects import postgresql

mytable = Table("mytable", metadata,
        Column("data", postgresql.ARRAY(Integer, dimensions=2))
    )
```

The postgresql.ARRAY type provides all operations defined on the core types.ARRAY type, including support for "dimensions", indexed access, and simple matching such as types.ARRAY.Comparator.any() and types.ARRAY.Comparator.all().
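For instance, the generic any() comparison compiles on the PostgreSQL dialect to the = ANY (...) form; a short sketch, using a table invented for illustration:

```python
from sqlalchemy import Column, Integer, MetaData, Table
from sqlalchemy.dialects import postgresql

metadata = MetaData()
# Hypothetical table for illustration
mytable = Table(
    'mytable', metadata,
    Column('data', postgresql.ARRAY(Integer)),
)

# Renders along the lines of "%(param_1)s = ANY (mytable.data)",
# testing whether the value appears among the array's elements
expr = mytable.c.data.any(7)
sql = str(expr.compile(dialect=postgresql.dialect()))
print(sql)
```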
The postgresql.ARRAY class also provides PostgreSQL-specific methods for containment operations, including postgresql.ARRAY.Comparator.contains(), postgresql.ARRAY.Comparator.contained_by(), and postgresql.ARRAY.Comparator.overlap(), e.g.:

```python
mytable.c.data.contains([1, 2])
```

The postgresql.ARRAY type may not be supported on all PostgreSQL DBAPIs; it is currently known to work on psycopg2 only. Additionally, the postgresql.ARRAY type does not work directly in conjunction with the ENUM type. For a workaround, see the special type at Using ENUM with ARRAY.

class Comparator(expr)

Bases: sqlalchemy.types.Comparator

Define comparison operations for ARRAY. Note that these operations are in addition to those provided by the base types.ARRAY.Comparator class, including types.ARRAY.Comparator.any() and types.ARRAY.Comparator.all().

contained_by(other)

Boolean expression. Test if elements are a proper subset of the elements of the argument array expression.

contains(other, **kwargs)

Boolean expression. Test if elements are a superset of the elements of the argument array expression.

overlap(other)

Boolean expression. Test if array has elements in common with an argument array expression.

__init__(item_type, as_tuple=False, dimensions=None, zero_indexes=False)

Construct an ARRAY.

E.g.:

```python
Column('myarray', ARRAY(Integer))
```

Arguments are:

Parameters:

- item_type – The data type of items of this array. Note that dimensionality is irrelevant here, so multi-dimensional arrays like INTEGER[][] are constructed as ARRAY(Integer), not as ARRAY(ARRAY(Integer)) or such.
- as_tuple=False – Specify whether return results should be converted to tuples from lists. DBAPIs such as psycopg2 return lists by default. When tuples are returned, the results are hashable.
- dimensions – if non-None, the ARRAY will assume a fixed number of dimensions.
This will cause the DDL emitted for this\nARRAY to include the exact number of bracket clauses [],\nand will also optimize the performance of the type overall.\nNote that PG arrays are always implicitly “non-dimensioned”,\nmeaning they can store any number of dimensions no matter how\nthey were declared.","zero_indexes=False &#8211; \nwhen True, index values will be converted\nbetween Python zero-based and PostgreSQL one-based indexes, e.g.\na value of one will be added to all index values before passing\nto the database.","sqlalchemy.dialects.postgresql.array_agg(*arg, **kw)","PostgreSQL-specific form of array_agg, ensures\nreturn type is postgresql.ARRAY    and not\nthe plain types.ARRAY, unless an explicit type_\nis passed.","sqlalchemy.dialects.postgresql.Any(other, arrexpr, operator=)","A synonym for the ARRAY.Comparator.any()    method.\nThis method is legacy and is here for backwards-compatibility.","sqlalchemy.dialects.postgresql.Tout(other, arrexpr, operator=)","A synonym for the ARRAY.Comparator.all()    method.\nThis method is legacy and is here for backwards-compatibility.","class sqlalchemy.dialects.postgresql.BIT(length=None, varying=False)","Bases: sqlalchemy.types.TypeEngine","class sqlalchemy.dialects.postgresql.BYTEA(length=None)","Bases: sqlalchemy.types.LargeBinary","__init__(length=None)","Construct a LargeBinary type.","Paramètres","length – optional, a length for the column for use in\nDDL statements, for those binary types that accept a length,\nsuch as the MySQL BLOB type.","class sqlalchemy.dialects.postgresql.CIDR","Bases: sqlalchemy.types.TypeEngine","class sqlalchemy.dialects.postgresql.DOUBLE_PRECISION(precision=None, asdecimal=False, decimal_return_scale=None)","Bases: sqlalchemy.types.Float","__init__(precision=None, asdecimal=False, decimal_return_scale=None)","Construct a Float.","Paramètres","précision – the numeric precision for use in DDL CREATE\nTABLE.","asdecimal – the same flag as that of Numeric, but\ndefaults to Faux. 
Note that setting this flag to True results in floating point conversion.

decimal_return_scale – Default scale to use when converting from floats to Python decimals. Floating point values will typically be much longer due to decimal inaccuracy, and most floating point database types don't have a notion of "scale", so by default the float type looks for the first ten decimal places when converting. Specifying this value will override that length. Note that the MySQL float types, which do include "scale", will use "scale" as the default for decimal_return_scale, if not otherwise specified.

class sqlalchemy.dialects.postgresql.ENUM(*enums, **kw)

Bases: sqlalchemy.types.NativeForEmulated, sqlalchemy.types.Enum

PostgreSQL ENUM type.

This is a subclass of types.Enum which includes support for PG's CREATE TYPE and DROP TYPE. When the builtin type types.Enum is used and the Enum.native_enum flag is left at its default of True, the PostgreSQL backend will use a postgresql.ENUM type as the implementation, so the special create/drop rules will be used.

The create/drop behavior of ENUM is necessarily intricate, due to the awkward relationship the ENUM type has with the parent table, in that it may be "owned" by just a single table, or may be shared among many tables.

When using types.Enum or postgresql.ENUM in an "inline" fashion, the CREATE TYPE and DROP TYPE are emitted corresponding to when the Table.create() and Table.drop() methods are called:

```
table = Table('sometable', metadata,
    Column('some_enum', ENUM('a', 'b', 'c', name='myenum'))
)

table.create(engine)  # will emit CREATE ENUM and CREATE TABLE
table.drop(engine)    # will emit DROP TABLE and DROP ENUM
```

To use a common enumerated type between multiple tables, the best practice is to declare the types.Enum or postgresql.ENUM independently, and associate it with the MetaData object itself:

```
my_enum = ENUM('a', 'b', 'c', name='myenum', metadata=metadata)

t1 = Table('sometable_one', metadata,
    Column('some_enum', myenum)
)

t2 = Table('sometable_two', metadata,
    Column('some_enum', myenum)
)
```

When this pattern is used, care must still be taken at the level of individual table creates. Emitting CREATE TABLE without also specifying checkfirst=True will still cause issues:

```
t1.create(engine)  # will fail: no such type 'myenum'
```

If we specify checkfirst=True, the individual table-level create operation will check for the ENUM and create it if not present:

```
# will check if enum exists, and emit CREATE TYPE if not
t1.create(engine, checkfirst=True)
```

When using a metadata-level ENUM type, the type will always be created and dropped when the metadata-wide create/drop is called:

```
metadata.create_all(engine)  # will emit CREATE TYPE
metadata.drop_all(engine)    # will emit DROP TYPE
```

The type can also be created and dropped directly:

```
my_enum.create(engine)
my_enum.drop(engine)
```

Changed in version 1.0.0: The PostgreSQL postgresql.ENUM type now behaves more strictly with regards to CREATE/DROP. A metadata-level ENUM type will only be created and dropped at the metadata level, not the table level, with the exception of table.create(checkfirst=True). The table.drop() call will now emit a DROP TYPE for a table-level enumerated type.

__init__(*enums, **kw)

Construct an ENUM. Arguments are the same as those of types.Enum, but also include the following parameters.

Parameters

create_type – Defaults to True. Indicates that CREATE TYPE should be emitted, after optionally checking for the presence of the type, when the parent table is being created; and additionally that DROP TYPE is called when the table is dropped. When False, no check will be performed and no CREATE TYPE or DROP TYPE is emitted, unless create() or drop() are called directly. Setting to False is helpful when invoking a creation scheme to a SQL file without access to the actual database; the create() and drop() methods can be used to emit SQL to a target bind.

create(bind=None, checkfirst=True)

Emit CREATE TYPE for this ENUM. If the underlying dialect does not support PostgreSQL CREATE TYPE, no action is taken.

Parameters

bind – a connectable Engine, Connection, or similar object to emit SQL.

checkfirst – if True, a query against the PG catalog will first be performed to see if the type does not already exist before creating.

drop(bind=None, checkfirst=True)

Emit DROP TYPE for this ENUM. If the underlying dialect does not support PostgreSQL DROP TYPE, no action is taken.

Parameters

bind – a connectable Engine, Connection, or similar object to emit SQL.

checkfirst – if True, a query against the PG catalog will first be performed to see if the type actually exists before dropping.

class sqlalchemy.dialects.postgresql.HSTORE(text_type=None)

Bases: sqlalchemy.types.Indexable, sqlalchemy.types.Concatenable, sqlalchemy.types.TypeEngine

Represent the PostgreSQL HSTORE type.

The HSTORE type stores dictionaries containing strings, e.g.:

```
data_table = Table('data_table', metadata,
    Column('id', Integer, primary_key=True),
    Column('data', HSTORE)
)

with engine.connect() as conn:
    conn.execute(
        data_table.insert(),
        data={"key1": "value1", "key2": "value2"}
    )
```

HSTORE provides for a wide range of operations, including:

Index operations:

```
data_table.c.data['some key'] == 'some value'
```

Containment operations:

```
data_table.c.data.has_key('some key')

data_table.c.data.has_all(['one', 'two', 'three'])
```

Concatenation:

```
data_table.c.data + {"k1": "v1"}
```

For a full list of special methods see HSTORE.comparator_factory.

For usage with the SQLAlchemy ORM, it may be desirable to combine the usage of HSTORE with MutableDict, part of the sqlalchemy.ext.mutable extension. This extension will allow "in-place" changes to the dictionary, e.g. addition of new keys or replacement/removal of existing keys to/from the current dictionary, to produce events which will be detected by the unit of work:

```
from sqlalchemy.ext.mutable import MutableDict

class MyClass(Base):
    __tablename__ = 'data_table'

    id = Column(Integer, primary_key=True)
    data = Column(MutableDict.as_mutable(HSTORE))

my_object = session.query(MyClass).one()

# in-place mutation, requires Mutable extension
# in order for the ORM to detect
my_object.data['some_key'] = 'some value'

session.commit()
```

When the sqlalchemy.ext.mutable extension is not used, the ORM will not be alerted to any changes to the contents of an existing dictionary, unless that dictionary value is re-assigned to the HSTORE-attribute itself, thus generating a change event.

See also: hstore – render the PostgreSQL hstore() function.

class Comparator(expr)

Bases: sqlalchemy.types.Indexable.Comparator, sqlalchemy.types.Concatenable.Comparator

Define comparison operations for HSTORE.

array()

Text array expression. Returns array of alternating keys and values.

contained_by(other)

Boolean expression. Test if keys are a proper subset of the keys of the argument jsonb expression.

contains(other, **kwargs)

Boolean expression. Test if keys (or array) are a superset of / contained the keys of the argument jsonb expression.

defined(key)

Boolean expression. Test for presence of a non-NULL value for the key.
Note that the key may be a SQLA expression.

delete(key)

HStore expression. Returns the contents of this hstore with the given key deleted. Note that the key may be a SQLA expression.

has_all(other)

Boolean expression. Test for presence of all keys in jsonb.

has_any(other)

Boolean expression. Test for presence of any key in jsonb.

has_key(other)

Boolean expression. Test for presence of a key. Note that the key may be a SQLA expression.

keys()

Text array expression. Returns array of keys.

matrix()

Text array expression. Returns array of [key, value] pairs.

slice(array)

HStore expression. Returns a subset of an hstore defined by an array of keys.

vals()

Text array expression. Returns array of values.

__init__(text_type=None)

Construct a new HSTORE.

Parameters

text_type – the type that should be used for indexed values. Defaults to types.Text.

bind_processor(dialect)

Return a conversion function for processing bind values. Returns a callable which will receive a bind parameter value as the sole positional argument and will return a value to send to the DB-API. If processing is not necessary, the method should return None.

Parameters

dialect – Dialect instance in use.

comparator_factory

alias of HSTORE.Comparator

result_processor(dialect, coltype)

Return a conversion function for processing result row values. Returns a callable which will receive a result row column value as the sole positional argument and will return a value to return to the user. If processing is not necessary, the method should return None.

Parameters

dialect – Dialect instance in use.

class sqlalchemy.dialects.postgresql.hstore(*args, **kwargs)

Bases: sqlalchemy.sql.functions.GenericFunction

Construct an hstore value within a SQL expression using the PostgreSQL hstore() function. The hstore function accepts one or two arguments as described in the PostgreSQL documentation. E.g.:

```
from sqlalchemy.dialects.postgresql import array, hstore

select([hstore('key1', 'value1')])

select([
        hstore(
            array(['key1', 'key2', 'key3']),
            array(['value1', 'value2', 'value3'])
        )
    ])
```

See also: HSTORE – the PostgreSQL HSTORE datatype.

type

alias of HSTORE

class sqlalchemy.dialects.postgresql.INET

Bases: sqlalchemy.types.TypeEngine

class sqlalchemy.dialects.postgresql.INTERVAL(precision=None, fields=None)

Bases: sqlalchemy.types.NativeForEmulated, sqlalchemy.types._AbstractInterval

PostgreSQL INTERVAL type. The INTERVAL type may not be supported on all DBAPIs. It is known to work on psycopg2 and not pg8000 or zxjdbc.

__init__(precision=None, fields=None)

Construct an INTERVAL.

Parameters

precision – optional integer precision value

fields – string fields specifier. Allows storage of the interval's fields to be limited, such as "YEAR", "MONTH", "DAY TO HOUR", etc.

class sqlalchemy.dialects.postgresql.JSON(none_as_null=False, astext_type=None)

Bases: sqlalchemy.types.JSON

Represent the PostgreSQL JSON type. This type is a specialization of the Core-level types.JSON type.
Be sure to read the documentation for types.JSON for important tips regarding treatment of NULL values and ORM use.

The operators provided by the PostgreSQL version of JSON include:

Index operations (the -> operator):

```
data_table.c.data['some key']

data_table.c.data[5]
```

Index operations returning text (the ->> operator):

```
data_table.c.data['some key'].astext == 'some value'
```

Index operations with CAST (equivalent to CAST(col ->> ['some key'] AS <type>)):

```
data_table.c.data['some key'].astext.cast(Integer) == 5
```

Path index operations (the #> operator):

```
data_table.c.data[('key_1', 'key_2', 5, ..., 'key_n')]
```

Path index operations returning text (the #>> operator):

```
data_table.c.data[('key_1', 'key_2', 5, ..., 'key_n')].astext == 'some value'
```

Changed in version 1.1: The ColumnElement.cast() operator on JSON objects now requires that the JSON.Comparator.astext modifier be called explicitly, if the cast works only from a textual string.

Index operations return an expression object whose type defaults to JSON by default, so that further JSON-oriented instructions may be called upon the result type.

Custom serializers and deserializers are specified at the dialect level, that is, using create_engine(). The reason for this is that when using psycopg2, the DBAPI only allows serializers at the per-cursor or per-connection level. E.g.:

```
engine = create_engine("postgresql://scott:tiger@localhost/test",
                        json_serializer=my_serialize_fn,
                        json_deserializer=my_deserialize_fn
                )
```

When using the psycopg2 dialect, the json_deserializer is registered against the database using psycopg2.extras.register_default_json.

class Comparator(expr)

Bases: sqlalchemy.types.Comparator

Define comparison operations for JSON.

property astext

On an indexed expression, use the "astext" (e.g. "->>") conversion when rendered in SQL. E.g.:

```
select([data_table.c.data['some key'].astext])
```

__init__(none_as_null=False, astext_type=None)

Construct a JSON type.

Parameters

none_as_null – if True, persist the value None as a SQL NULL value, not the JSON encoding of null. Note that when this flag is False, the null() construct can still be used to persist a NULL value:

```
from sqlalchemy import null
conn.execute(table.insert(), data=null())
```

Changed in version 0.9.8: Added none_as_null, and null() is now supported in order to persist a NULL value.

astext_type – the type to use for the JSON.Comparator.astext accessor on indexed attributes.
Defaults to types.Text.

comparator_factory

alias of JSON.Comparator

class sqlalchemy.dialects.postgresql.JSONB(none_as_null=False, astext_type=None)

Bases: sqlalchemy.dialects.postgresql.json.JSON

Represent the PostgreSQL JSONB type.

The JSONB type stores arbitrary JSONB format data, e.g.:

```
data_table = Table('data_table', metadata,
    Column('id', Integer, primary_key=True),
    Column('data', JSONB)
)

with engine.connect() as conn:
    conn.execute(
        data_table.insert(),
        data={"key1": "value1", "key2": "value2"}
    )
```

The JSONB type includes all operations provided by JSON, including the same behaviors for indexing operations. It also adds additional operators specific to JSONB, including JSONB.Comparator.has_key(), JSONB.Comparator.has_all(), JSONB.Comparator.has_any(), JSONB.Comparator.contains(), and JSONB.Comparator.contained_by().

Like the JSON type, the JSONB type does not detect in-place changes when used with the ORM, unless the sqlalchemy.ext.mutable extension is used.

Custom serializers and deserializers are shared with the JSON class, using the json_serializer and json_deserializer keyword arguments. These must be specified at the dialect level using create_engine(). When using psycopg2, the serializers are associated with the jsonb type using psycopg2.extras.register_default_jsonb on a per-connection basis, in the same way that psycopg2.extras.register_default_json is used to register these handlers with the json type.

class Comparator(expr)

Bases: sqlalchemy.dialects.postgresql.json.Comparator

Define comparison operations for JSON.

contained_by(other)

Boolean expression. Test if keys are a proper subset of the keys of the argument jsonb expression.

contains(other, **kwargs)

Boolean expression. Test if keys (or array) are a superset of / contained the keys of the argument jsonb expression.

has_all(other)

Boolean expression. Test for presence of all keys in jsonb.

has_any(other)

Boolean expression. Test for presence of any key in jsonb.

has_key(other)

Boolean expression. Test for presence of a key. Note that the key may be a SQLA expression.

comparator_factory

alias of JSONB.Comparator

class sqlalchemy.dialects.postgresql.MACADDR

Bases: sqlalchemy.types.TypeEngine

class sqlalchemy.dialects.postgresql.MONEY

Bases: sqlalchemy.types.TypeEngine

Provide the PostgreSQL MONEY type.

class sqlalchemy.dialects.postgresql.OID

Bases: sqlalchemy.types.TypeEngine

Provide the PostgreSQL OID type.

class sqlalchemy.dialects.postgresql.REAL(precision=None, asdecimal=False, decimal_return_scale=None)

Bases: sqlalchemy.types.Float

The SQL REAL type.

__init__(precision=None, asdecimal=False, decimal_return_scale=None)

Construct a Float.

Parameters

precision – the numeric precision for use in DDL CREATE TABLE.

asdecimal – the same flag as that of Numeric, but defaults to False. Note that setting this flag to True results in floating point conversion.

decimal_return_scale – Default scale to use when converting from floats to Python decimals. Floating point values will typically be much longer due to decimal inaccuracy, and most floating point database types don't have a notion of "scale", so by default the float type looks for the first ten decimal places when converting. Specifying this value will override that length.
Note that the MySQL float types, which do include "scale", will use "scale" as the default for decimal_return_scale, if not otherwise specified.

class sqlalchemy.dialects.postgresql.REGCLASS

Bases: sqlalchemy.types.TypeEngine

Provide the PostgreSQL REGCLASS type.

class sqlalchemy.dialects.postgresql.TSVECTOR

Bases: sqlalchemy.types.TypeEngine

The postgresql.TSVECTOR type implements the PostgreSQL text search type TSVECTOR. It can be used to do full text queries on natural language documents.

class sqlalchemy.dialects.postgresql.UUID(as_uuid=False)

Bases: sqlalchemy.types.TypeEngine

PostgreSQL UUID type. Represents the UUID column type, interpreting data either as natively returned by the DBAPI or as Python uuid objects. The UUID type may not be supported on all DBAPIs. It is known to work on psycopg2 and not pg8000.

__init__(as_uuid=False)

Construct a UUID type.

Parameters

as_uuid=False – if True, values will be interpreted as Python uuid objects, converting to/from string via the DBAPI.

Range Types

The range column types found in PostgreSQL 9.2 onwards are catered for by the following types:

class sqlalchemy.dialects.postgresql.INT4RANGE

Bases: sqlalchemy.dialects.postgresql.ranges.RangeOperators, sqlalchemy.types.TypeEngine

Represent the PostgreSQL INT4RANGE type.

class sqlalchemy.dialects.postgresql.INT8RANGE

Bases: sqlalchemy.dialects.postgresql.ranges.RangeOperators, sqlalchemy.types.TypeEngine

Represent the PostgreSQL INT8RANGE type.

class sqlalchemy.dialects.postgresql.NUMRANGE

Bases: sqlalchemy.dialects.postgresql.ranges.RangeOperators, sqlalchemy.types.TypeEngine

Represent the PostgreSQL NUMRANGE type.

class sqlalchemy.dialects.postgresql.DATERANGE

Bases: sqlalchemy.dialects.postgresql.ranges.RangeOperators, sqlalchemy.types.TypeEngine

Represent the PostgreSQL DATERANGE type.

class sqlalchemy.dialects.postgresql.TSRANGE

Bases: sqlalchemy.dialects.postgresql.ranges.RangeOperators, sqlalchemy.types.TypeEngine

Represent the PostgreSQL TSRANGE type.

class sqlalchemy.dialects.postgresql.TSTZRANGE

Bases: sqlalchemy.dialects.postgresql.ranges.RangeOperators, sqlalchemy.types.TypeEngine

Represent the PostgreSQL TSTZRANGE type.

The types above get most of their functionality from the following mixin:

class sqlalchemy.dialects.postgresql.ranges.RangeOperators

This mixin provides functionality for the Range Operators listed in Table 9-44 of the postgres documentation for Range Functions and Operators. It is used by all the range types provided in the postgres dialect and can likely be used for any range types you create yourself. No extra support is provided for the Range Functions listed in Table 9-45 of the postgres documentation. For these, the normal func() object should be used.

class comparator_factory(expr)

Bases: sqlalchemy.types.Comparator

Define comparison operations for range types.

__ne__(other)

Boolean expression. Returns true if two ranges are not equal.

adjacent_to(other)

Boolean expression. Returns true if the range in the column is adjacent to the range in the operand.

contained_by(other)

Boolean expression. Returns true if the column is contained within the right hand operand.

contains(other, **kw)

Boolean expression. Returns true if the right hand operand, which can be an element or a range, is contained within the column.

not_extend_left_of(other)

Boolean expression. Returns true if the range in the column does not extend left of the range in the operand.

not_extend_right_of(other)

Boolean expression. Returns true if the range in the column does not extend right of the range in the operand.

overlaps(other)

Boolean expression. Returns true if the column overlaps (has points in common with) the right hand operand.

strictly_left_of(other)

Boolean expression. Returns true if the column is strictly left of the right hand operand.

strictly_right_of(other)

Boolean expression. Returns true if the column is strictly right of the right hand operand.

Warning

The range type DDL support should work with any PostgreSQL DBAPI driver, however the data types returned may vary. If you are using psycopg2, it's recommended to upgrade to version 2.5 or later before using these column types.

When instantiating models that use these column types, you should pass whatever data type is expected by the DBAPI driver you're using for the column type. For psycopg2 these are psycopg2.extras.NumericRange, psycopg2.extras.DateRange, psycopg2.extras.DateTimeRange and psycopg2.extras.DateTimeTZRange, or the class you've registered with psycopg2.extras.register_range.

For example:

```
from psycopg2.extras import DateTimeRange
from sqlalchemy.dialects.postgresql import TSRANGE

class RoomBooking(Base):

    __tablename__ = 'room_booking'

    room = Column(Integer(), primary_key=True)
    during = Column(TSRANGE())

booking = RoomBooking(
    room=101,
    during=DateTimeRange(datetime(2013, 3, 23), None)
)
```

PostgreSQL Constraint Types

SQLAlchemy supports PostgreSQL EXCLUDE constraints via the ExcludeConstraint class:

class sqlalchemy.dialects.postgresql.ExcludeConstraint(*elements, **kw)

Bases: sqlalchemy.schema.ColumnCollectionConstraint

A table-level EXCLUDE constraint. Defines an EXCLUDE constraint as described in the postgres documentation.

__init__(*elements, **kw)

Create an ExcludeConstraint object. E.g.:

```
const = ExcludeConstraint(
    (Column('period'), '&&'),
    (Column('group'), '='),
    where=(Column('group') != 'some group')
)
```

The constraint is normally embedded into the Table construct directly, or added later using append_constraint():

```
some_table = Table(
    'some_table', metadata,
    Column('id', Integer, primary_key=True),
    Column('period', TSRANGE()),
    Column('group', String)
)

some_table.append_constraint(
    ExcludeConstraint(
        (some_table.c.period, '&&'),
        (some_table.c.group, '='),
        where=some_table.c.group != 'some group',
        name='some_table_excl_const'
    )
)
```

Parameters

*elements – A sequence of two-tuples of the form (column, operator) where "column" is a SQL expression element or a raw SQL string, most typically a Column object, and "operator" is a string containing the operator to use. In order to specify a column name when a Column object is not available, while ensuring that any necessary quoting rules take effect, an ad-hoc Column or sql.expression.column() object should be used.

name – Optional, the in-database name of this constraint.

deferrable – Optional bool. If set, emit DEFERRABLE or NOT DEFERRABLE when issuing DDL for this constraint.

initially – Optional string. If set, emit INITIALLY <value> when issuing DDL for this constraint.

using – Optional string. If set, emit USING <index_method> when issuing DDL for this constraint. Defaults to 'gist'.

where – Optional SQL expression construct or literal SQL string. If set, emit WHERE <predicate> when issuing DDL for this constraint.

Warning

The ExcludeConstraint.where argument to ExcludeConstraint can be passed as a Python string argument, which will be treated as trusted SQL text and rendered as given.
DO NOT PASS UNTRUSTED INPUT TO THIS PARAMETER.

For example:

```
from sqlalchemy.dialects.postgresql import ExcludeConstraint, TSRANGE

class RoomBooking(Base):

    __tablename__ = 'room_booking'

    room = Column(Integer(), primary_key=True)
    during = Column(TSRANGE())

    __table_args__ = (
        ExcludeConstraint(('room', '='), ('during', '&&')),
    )
```

PostgreSQL DML Constructs

sqlalchemy.dialects.postgresql.dml.insert(table, values=None, inline=False, bind=None, prefixes=None, returning=None, return_defaults=False, **dialect_kw)

Construct a new Insert object. This constructor is mirrored as a public API function; see insert() for a full usage and argument description.

class sqlalchemy.dialects.postgresql.dml.Insert(table, values=None, inline=False, bind=None, prefixes=None, returning=None, return_defaults=False, **dialect_kw)

Bases: sqlalchemy.sql.expression.Insert

PostgreSQL-specific implementation of INSERT. Adds methods for PG-specific syntaxes such as ON CONFLICT.

excluded

Provide the excluded namespace for an ON CONFLICT statement. PG's ON CONFLICT clause allows reference to the row that would be inserted, known as excluded. This attribute provides all columns in this row to be referenceable.

on_conflict_do_nothing(constraint=None, index_elements=None, index_where=None)

Specifies a DO NOTHING action for the ON CONFLICT clause. The constraint and index_elements arguments are optional, but only one of these can be specified.

Parameters

constraint – The name of a unique or exclusion constraint on the table, or the constraint object itself if it has a .name attribute.

index_elements – A sequence consisting of string column names, Column objects, or other column expression objects that will be used to infer a target index.

index_where – Additional WHERE criterion that can be used to infer a conditional target index.

on_conflict_do_update(constraint=None, index_elements=None, index_where=None, set_=None, where=None)

Specifies a DO UPDATE SET action for the ON CONFLICT clause. Either the constraint or index_elements argument is required, but only one of these can be specified.

Parameters

constraint – The name of a unique or exclusion constraint on the table, or the constraint object itself if it has a .name attribute.

index_elements – A sequence consisting of string column names, Column objects, or other column expression objects that will be used to infer a target index.

index_where – Additional WHERE criterion that can be used to infer a conditional target index.

set_ – Required argument. A dictionary or other mapping object with column names as keys and expressions or literals as values, specifying the SET actions to take. If the target Column specifies a ".key" attribute distinct from the column name, that key should be used.

Warning

This dictionary does not take into account Python-specified default UPDATE values or generation functions, e.g. those specified using Column.onupdate. These values will not be exercised for an ON CONFLICT style of UPDATE, unless they are manually specified in the Insert.on_conflict_do_update.set_ dictionary.

where – Optional argument. If present, can be a literal SQL string or an acceptable expression for a WHERE clause that restricts the rows affected by DO UPDATE SET. Rows not meeting the WHERE condition will not be updated (effectively a DO NOTHING for those rows).

psycopg2

Support for the PostgreSQL database via the psycopg2 driver.

DBAPI

Documentation and download information (if applicable) for psycopg2 is available at: http://pypi.python.org/pypi/psycopg2/

Connecting

Connect String:

```
postgresql+psycopg2://user:password@host:port/dbname[?key=value&key=value...]
```

psycopg2 Connect Arguments

psycopg2-specific keyword arguments which are accepted by create_engine() are:

server_side_cursors: Enable the usage of "server side cursors" for SQL statements which support this feature. What this essentially means from a psycopg2 point of view is that the cursor is created using a name, e.g. connection.cursor('some name'), which has the effect that result rows are not immediately pre-fetched and buffered after statement execution, but are instead left on the server and only retrieved as needed. SQLAlchemy's ResultProxy uses special row-buffering behavior when this feature is enabled, such that groups of 100 rows at a time are fetched over the wire to reduce conversational overhead. Note that the Connection.execution_options.stream_results execution option is a more targeted way of enabling this mode on a per-execution basis.

use_native_unicode: Enable the usage of Psycopg2 "native unicode" mode per connection.
True by default.","isolation_level: This option, available for all PostgreSQL dialects,\ncomprend le AUTOCOMMIT    isolation level when using the psycopg2\ndialect.","client_encoding: sets the client encoding in a libpq-agnostic way,\nusing psycopg2’s set_client_encoding()    method.","executemany_mode, executemany_batch_page_size,\nexecutemany_values_page_size: Allows use of psycopg2\nextensions for optimizing “executemany”-stye queries.  See the referenced\nsection below for details.","use_batch_mode: this is the previous setting used to affect “executemany”\nmode and is now deprecated.","Unix Domain Connections\npsycopg2 supports connecting via Unix domain connections.   When the hôte\nportion of the URL is omitted, SQLAlchemy passes None    to psycopg2,\nwhich specifies Unix-domain communication rather than TCP/IP communication:","create_engine(&quot;postgresql+psycopg2://user:password@/dbname&quot;)","By default, the socket file used is to connect to a Unix-domain socket\ndans /tmp, or whatever socket directory was specified when PostgreSQL\nwas built.  This value can be overridden by passing a pathname to psycopg2,\nusing hôte    as an additional keyword argument:","create_engine(&quot;postgresql+psycopg2://user:password@/dbname?host=/var/lib/postgresql&quot;)","Empty DSN Connections / Environment Variable Connections\nThe psycopg2 DBAPI can connect to PostgreSQL by passing an empty DSN to the\nlibpq client library, which by default indicates to connect to a localhost\nPostgreSQL database that is open for “trust” connections.  
This behavior can be further tailored using a particular set of environment variables which are prefixed with PG_..., which are consumed by libpq to take the place of any or all elements of the connection string. For this form, the URL can be passed without any elements other than the initial scheme:

    engine = create_engine('postgresql+psycopg2://')

In the above form, a blank "dsn" string is passed to the psycopg2.connect() function which in turn represents an empty DSN passed to libpq.

New in version 1.3.2: support for parameter-less connections with psycopg2.

See also: Environment Variables - PostgreSQL documentation on how to use PG_... environment variables for connections.

Per-Statement/Connection Execution Options

The following DBAPI-specific options are respected when used with Connection.execution_options(), Executable.execution_options(), Query.execution_options(), in addition to those not specific to DBAPIs:

isolation_level - Set the transaction isolation level for the lifespan of a Connection (can only be set on a connection, not a statement or query). See Psycopg2 Transaction Isolation Level.

stream_results - Enable or disable usage of psycopg2 server side cursors; this feature makes use of "named" cursors in combination with special result handling methods so that result rows are not fully buffered. If None or not set, the server_side_cursors option of the Engine is used.

max_row_buffer - when using stream_results, an integer value that specifies the maximum number of rows to buffer at a time.
This is interpreted by the BufferedRowResultProxy, and if omitted the buffer will grow to ultimately store 1000 rows at a time.

Psycopg2 Fast Execution Helpers

Modern versions of psycopg2 include a feature known as Fast Execution Helpers, which have been shown in benchmarking to improve psycopg2's executemany() performance, primarily with INSERT statements, by multiple orders of magnitude. SQLAlchemy allows this extension to be used for all executemany() style calls invoked by an Engine when used with multiple parameter sets, which includes the use of this feature both by the Core as well as by the ORM for inserts of objects with non-autogenerated primary key values, by adding the executemany_mode flag to create_engine():

    engine = create_engine(
        "postgresql+psycopg2://scott:tiger@host/dbname",
        executemany_mode='batch')

Changed in version 1.3.7: the use_batch_mode flag has been superseded by a new parameter executemany_mode which provides support both for psycopg2's execute_batch helper as well as the execute_values helper.

Possible options for executemany_mode include:

None - By default, psycopg2's extensions are not used, and the usual cursor.executemany() method is used when invoking batches of statements.

'batch' - Uses psycopg2.extras.execute_batch so that multiple copies of a SQL query, each one corresponding to a parameter set passed to executemany(), are joined into a single SQL string separated by a semicolon. This is the same behavior as was provided by the use_batch_mode=True flag.

'values' - For Core insert() constructs only (including those emitted by the ORM automatically), the psycopg2.extras.execute_values extension is used so that multiple parameter sets are grouped into a single INSERT statement and joined together with multiple VALUES expressions.
This method requires that the string text of the VALUES clause inside the INSERT statement is manipulated, so is only supported with a compiled insert() construct where the format is predictable. For all other constructs, including plain textual INSERT statements not rendered by the SQLAlchemy expression language compiler, the psycopg2.extras.execute_batch method is used. It is therefore important to note that "values" mode implies that "batch" mode is also used for all statements for which "values" mode does not apply.

For both strategies, the executemany_batch_page_size and executemany_values_page_size arguments control how many parameter sets should be represented in each execution. Because "values" mode implies a fallback down to "batch" mode for non-INSERT statements, there are two independent page size arguments. For each, the default value of None means to use psycopg2's defaults, which at the time of this writing are quite low at 100. For the execute_values method, a number as high as 10000 may prove to be performant, whereas for execute_batch, as the number represents full statements repeated, a number closer to the default of 100 is likely more appropriate:

    engine = create_engine(
        "postgresql+psycopg2://scott:tiger@host/dbname",
        executemany_mode='values',
        executemany_values_page_size=10000, executemany_batch_page_size=500)

Changed in version 1.3.7: Added support for psycopg2.extras.execute_values. The use_batch_mode flag is superseded by the executemany_mode flag.

Unicode with Psycopg2

By default, the psycopg2 driver uses the psycopg2.extensions.UNICODE extension, such that the DBAPI receives and returns all strings as Python Unicode objects directly; SQLAlchemy passes these values through without change.
Psycopg2 here will encode/decode string values based on the current "client encoding" setting; by default this is the value in the postgresql.conf file, which often defaults to SQL_ASCII. Typically, this can be changed to utf8, as a more useful default:

    # postgresql.conf file

    # client_encoding = sql_ascii  # actually, defaults to database
                                   # encoding
    client_encoding = utf8

A second way to affect the client encoding is to set it within Psycopg2 locally. SQLAlchemy will call psycopg2's connection.set_client_encoding() method on all new connections based on the value passed to create_engine() using the client_encoding parameter:

    # set_client_encoding() setting;
    # works for *all* PostgreSQL versions
    engine = create_engine("postgresql://user:pass@host/dbname",
                           client_encoding='utf8')

This overrides the encoding specified in the PostgreSQL client configuration. When using the parameter in this way, the psycopg2 driver emits SET client_encoding TO 'utf8' on the connection explicitly, and works in all PostgreSQL versions.

Note that the client_encoding setting as passed to create_engine() is not the same as the more recently added client_encoding parameter now supported by libpq directly.
This is enabled when client_encoding is passed directly to psycopg2.connect(), and from SQLAlchemy is passed using the create_engine.connect_args parameter:

    engine = create_engine(
        "postgresql://user:pass@host/dbname",
        connect_args={'client_encoding': 'utf8'})

    # using the query string is equivalent
    engine = create_engine("postgresql://user:pass@host/dbname?client_encoding=utf8")

The above parameter was only added to libpq as of version 9.1 of PostgreSQL, so using the previous method is better for cross-version support.

Disabling Native Unicode

SQLAlchemy can also be instructed to skip the usage of the psycopg2 UNICODE extension and to instead utilize its own unicode encode/decode services, which are normally reserved only for those DBAPIs that don't fully support unicode directly. Passing use_native_unicode=False to create_engine() will disable usage of psycopg2.extensions.UNICODE. SQLAlchemy will instead encode data itself into Python bytestrings on the way in and coerce from bytes on the way back, using the value of the create_engine() encoding parameter, which defaults to utf-8. SQLAlchemy's own unicode encode/decode functionality is steadily becoming obsolete as most DBAPIs now support unicode fully.

Bound Parameter Styles

The default parameter style for the psycopg2 dialect is "pyformat", where SQL is rendered using %(paramname)s style. This format has the limitation that it does not accommodate the unusual case of parameter names that actually contain percent or parenthesis symbols; as SQLAlchemy in many cases generates bound parameter names based on the name of a column, the presence of these characters in a column name can lead to problems. There are two solutions to the issue of a schema.Column that contains one of these characters in its name.
One is to specify the schema.Column.key for columns that have such names:

    measurement = Table('measurement', metadata,
        Column('Size (meters)', Integer, key='size_meters')
    )

Above, an INSERT statement such as measurement.insert() will use size_meters as the parameter name, and a SQL expression such as measurement.c.size_meters > 10 will derive the bound parameter name from the size_meters key as well.

Changed in version 1.0.0: SQL expressions will use Column.key as the source of naming when anonymous bound parameters are created in SQL expressions; previously, this behavior only applied to Table.insert() and Table.update() parameter names.

The other solution is to use a positional format; psycopg2 allows use of the "format" paramstyle, which can be passed to create_engine.paramstyle:

    engine = create_engine(
        'postgresql://scott:tiger@localhost:5432/test', paramstyle='format')

With the above engine, instead of a statement like:

    INSERT INTO measurement ("Size (meters)") VALUES (%(Size (meters))s)
    {'Size (meters)': 1}

we instead see:

    INSERT INTO measurement ("Size (meters)") VALUES (%s)
    (1, )

Where above, the dictionary style is converted into a tuple with positional style.

Transactions

The psycopg2 dialect fully supports SAVEPOINT and two-phase commit operations.

Psycopg2 Transaction Isolation Level

As discussed in Transaction Isolation Level, all PostgreSQL dialects support setting of transaction isolation level both via the isolation_level parameter passed to create_engine(), as well as the isolation_level argument used by Connection.execution_options().
When using the psycopg2 dialect, these options make use of psycopg2's set_isolation_level() connection method, rather than emitting a PostgreSQL directive; this is because psycopg2's API-level setting is always emitted at the start of each transaction in any case.

The psycopg2 dialect supports these constants for isolation level:

READ COMMITTED
READ UNCOMMITTED
REPEATABLE READ
SERIALIZABLE
AUTOCOMMIT

NOTICE logging

The psycopg2 dialect will log PostgreSQL NOTICE messages via the sqlalchemy.dialects.postgresql logger. When this logger is set to the logging.INFO level, notice messages will be logged:

    import logging

    logging.getLogger('sqlalchemy.dialects.postgresql').setLevel(logging.INFO)

Above, it is assumed that logging is configured externally. If this is not the case, configuration such as logging.basicConfig() must be utilized:

    import logging

    logging.basicConfig()   # log messages to stdout
    logging.getLogger('sqlalchemy.dialects.postgresql').setLevel(logging.INFO)

HSTORE type

The psycopg2 DBAPI includes an extension to natively handle marshalling of the HSTORE type.
The SQLAlchemy psycopg2 dialect will enable this extension by default when psycopg2 version 2.4 or greater is used, and it is detected that the target database has the HSTORE type set up for use. In other words, when the dialect makes the first connection, a sequence like the following is performed:

Request the available HSTORE oids using psycopg2.extras.HstoreAdapter.get_oids(). If this function returns a list of HSTORE identifiers, we then determine that the HSTORE extension is present. This function is skipped if the version of psycopg2 installed is less than version 2.4.

If the use_native_hstore flag is at its default of True, and we've detected that HSTORE oids are available, the psycopg2.extensions.register_hstore() extension is invoked for all connections.

The register_hstore() extension has the effect of all Python dictionaries being accepted as parameters regardless of the type of target column in SQL. The dictionaries are converted by this extension into a textual HSTORE expression. If this behavior is not desired, disable the use of the hstore extension by setting use_native_hstore to False as follows:

    engine = create_engine("postgresql+psycopg2://scott:tiger@localhost/test",
                use_native_hstore=False)

The HSTORE type is still supported when the psycopg2.extensions.register_hstore() extension is not used.
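Independently of the psycopg2 extension, the HSTORE type itself can be used in table definitions. A minimal sketch that compiles the resulting DDL without connecting to a database (the table and column names here are illustrative):

```python
from sqlalchemy import Column, Integer, MetaData, Table
from sqlalchemy.dialects import postgresql
from sqlalchemy.schema import CreateTable

metadata = MetaData()

# hypothetical table with an HSTORE column for key/value attributes
data = Table(
    "data", metadata,
    Column("id", Integer, primary_key=True),
    Column("attrs", postgresql.HSTORE),
)

# compile the CREATE TABLE statement against the PostgreSQL dialect;
# the "attrs" column is rendered with the HSTORE datatype
ddl = str(CreateTable(data).compile(dialect=postgresql.dialect()))
print(ddl)
```

Whether the round trip of Python dictionaries to and from such a column is handled by psycopg2's register_hstore() or by SQLAlchemy's own marshalling depends on the use_native_hstore flag discussed above.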
It merely means that the coercion between Python dictionaries and the HSTORE string format, on both the parameter side and the result side, will take place within SQLAlchemy's own marshalling logic, and not that of psycopg2 which may be more performant.

pg8000

Support for the PostgreSQL database via the pg8000 driver.

DBAPI

Documentation and download information (if applicable) for pg8000 is available at: https://pythonhosted.org/pg8000/

Connecting

Connect String:

    postgresql+pg8000://user:password@host:port/dbname[?key=value&key=value...]

Note: The pg8000 dialect is not tested as part of SQLAlchemy's continuous integration and may have unresolved issues. The recommended PostgreSQL dialect is psycopg2.

Unicode

pg8000 will encode / decode string values between it and the server using the PostgreSQL client_encoding parameter; by default this is the value in the postgresql.conf file, which often defaults to SQL_ASCII. Typically, this can be changed to utf-8, as a more useful default:

    #client_encoding = sql_ascii  # actually, defaults to database
                                  # encoding
    client_encoding = utf8

The client_encoding can be overridden for a session by executing the SQL:

    SET CLIENT_ENCODING TO 'utf8';

SQLAlchemy will execute this SQL on all new connections based on the value passed to create_engine() using the client_encoding parameter:

    engine = create_engine(
        "postgresql+pg8000://user:pass@host/dbname", client_encoding='utf8')

pg8000 Transaction Isolation Level

The pg8000 dialect offers the same isolation level settings as that of the psycopg2 dialect:

READ COMMITTED
READ UNCOMMITTED
REPEATABLE READ
SERIALIZABLE
AUTOCOMMIT

New in version 0.9.5: support for AUTOCOMMIT isolation level when using pg8000.

psycopg2cffi

Support for the PostgreSQL database via the psycopg2cffi driver.

DBAPI

Documentation and download information (if applicable) for psycopg2cffi is
available at: http://pypi.python.org/pypi/psycopg2cffi/

Connecting

Connect String:

    postgresql+psycopg2cffi://user:password@host:port/dbname[?key=value&key=value...]

psycopg2cffi is an adaptation of psycopg2, using CFFI for the C layer. This makes it suitable for use in e.g. PyPy. Documentation is as per psycopg2.

py-postgresql

Support for the PostgreSQL database via the py-postgresql driver.

DBAPI

Documentation and download information (if applicable) for py-postgresql is available at: http://python.projects.pgfoundry.org/

Connecting

Connect String:

    postgresql+pypostgresql://user:password@host:port/dbname[?key=value&key=value...]

Note: The pypostgresql dialect is not tested as part of SQLAlchemy's continuous integration and may have unresolved issues. The recommended PostgreSQL driver is psycopg2.

pygresql

Support for the PostgreSQL database via the pygresql driver.

DBAPI

Documentation and download information (if applicable) for pygresql is available at: http://www.pygresql.org/

Connecting

Connect String:

    postgresql+pygresql://user:password@host:port/dbname[?key=value&key=value...]

Note: The pygresql dialect is not tested as part of SQLAlchemy's continuous integration and may have unresolved issues. The recommended PostgreSQL dialect is psycopg2.

zxjdbc

Support for the PostgreSQL database via the zxJDBC for Jython driver.

DBAPI

Drivers for this database are available at: http://jdbc.postgresql.org/

Connecting

Connect String:

    postgresql+zxjdbc://scott:tiger@localhost/db
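Each connect string above begins with postgresql+driver, and that driver token selects one of the dialect implementations described in this document. As a sketch, the dialect class chosen by a URL can be loaded and inspected without the underlying DBAPI driver being installed, since the DBAPI module is only imported when an Engine actually connects:

```python
from sqlalchemy.engine.url import make_url

# load the dialect class selected by the driver portion of each URL;
# no connection is made and no DBAPI module is imported here
pg8000_dialect = make_url("postgresql+pg8000://u:p@host/db").get_dialect()
psycopg2_dialect = make_url("postgresql+psycopg2://u:p@host/db").get_dialect()

print(pg8000_dialect.name, pg8000_dialect.driver)      # postgresql pg8000
print(psycopg2_dialect.name, psycopg2_dialect.driver)  # postgresql psycopg2
```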
chargement automatique=Vrai, autoload_with=Connecticut)\n...</p>"},{"id":"text-25","type":"text","heading":"","plain_text":"Le processus ci-dessus fournirait à la MetaData.tables    collection\nréféré    table nommée sans pour autant le schéma:","html":"<p>Le processus ci-dessus fournirait à la MetaData.tables    collection\nréféré    table nommée sans pour autant le schéma:</p>"},{"id":"text-26","type":"text","heading":"","plain_text":"&gt;&gt;&gt; méta.les tables[[[[&#39;référé&#39;].schéma est Aucun\nVrai","html":"<p>&gt;&gt;&gt; méta.les tables[[[[&#039;référé&#039;].schéma est Aucun\nVrai</p>"},{"id":"text-27","type":"text","heading":"","plain_text":"Pour modifier le comportement de la réflexion de sorte que le schéma référencé soit\nmaintenu indépendamment de la chemin_recherche    réglage, utilisez le\npostgresql_ignore_search_path    option, qui peut être spécifiée en tant que\nargument spécifique au dialecte à la fois Table    aussi bien que\nMetaData.reflect ():","html":"<p>Pour modifier le comportement de la réflexion de sorte que le schéma référencé soit\nmaintenu indépendamment de la chemin_recherche    réglage, utilisez le\npostgresql_ignore_search_path    option, qui peut être spécifiée en tant que\nargument spécifique au dialecte à la fois Table    aussi bien que\nMetaData.reflect ():</p>"},{"id":"text-28","type":"text","heading":"","plain_text":"&gt;&gt;&gt; avec moteur.relier() comme Connecticut:\n...     Connecticut.exécuter(&quot;SET search_path TO test_schema, public&quot;)\n...     méta = MetaData()\n...     référant = Table(&#39;référant&#39;, méta, chargement automatique=Vrai,\n...                       autoload_with=Connecticut,\n...                       postgresql_ignore_search_path=Vrai)\n...","html":"<p>&gt;&gt;&gt; avec moteur.relier() comme Connecticut:\n...     Connecticut.exécuter(&quot;SET search_path TO test_schema, public&quot;)\n...     méta = MetaData()\n...     
référant = Table(&#039;référant&#039;, méta, chargement automatique=Vrai,\n...                       autoload_with=Connecticut,\n...                       postgresql_ignore_search_path=Vrai)\n...</p>"},{"id":"text-29","type":"text","heading":"","plain_text":"Nous allons maintenant avoir test_schema.referred    stocké comme qualifié de schéma:","html":"<p>Nous allons maintenant avoir test_schema.referred    stocké comme qualifié de schéma:</p>"},{"id":"text-30","type":"text","heading":"","plain_text":"&gt;&gt;&gt; méta.les tables[[[[&#39;test_schema.referred&#39;].schéma\n&#39;test_schema&#39;","html":"<p>&gt;&gt;&gt; méta.les tables[[[[&#039;test_schema.referred&#039;].schéma\n&#039;test_schema&#039;</p>"},{"id":"text-31","type":"text","heading":"","plain_text":"Notez que dans tous les cas, le schéma «par défaut» est toujours reflété comme\nAucun. Le schéma «par défaut» sur PostgreSQL est celui qui est renvoyé par le\nPostgreSQL current_schema ()    une fonction. Sur un PostgreSQL typique\nl&#39;installation, c&#39;est le nom Publique. Donc, un tableau qui fait référence à un autre\nqui est dans le Publique    (c&#39;est-à-dire par défaut) le schéma aura toujours le\n.schéma    attribut mis à Aucun.","html":"<p>Notez que dans tous les cas, le schéma «par défaut» est toujours reflété comme\nAucun. Le schéma «par défaut» sur PostgreSQL est celui qui est renvoyé par le\nPostgreSQL current_schema ()    une fonction. Sur un PostgreSQL typique\nl&#039;installation, c&#039;est le nom Publique. 
Donc, un tableau qui fait référence à un autre\nqui est dans le Publique    (c&#039;est-à-dire par défaut) le schéma aura toujours le\n.schéma    attribut mis à Aucun.</p>"},{"id":"text-32","type":"text","heading":"","plain_text":"Nouveau dans la version 0.9.2: Ajouté le postgresql_ignore_search_path\noption dialecte acceptée par Table    et\nMetaData.reflect ().","html":"<p>Nouveau dans la version 0.9.2: Ajouté le postgresql_ignore_search_path\noption dialecte acceptée par Table    et\nMetaData.reflect ().</p>"},{"id":"text-33","type":"text","heading":"","plain_text":"INSERT / UPDATE… RETOURNER\nLe dialecte supporte les PG 8.2 INSERT..RECLINANT, MISE À JOUR..RECLINANT    et\nSUPPRIMER .. RETOURNER    syntaxes.   INSERT..RECLINANT    est utilisé par défaut\npour les instructions INSERT à une seule ligne afin d&#39;extraire les données nouvellement générées\nidentificateurs de clé primaire. Pour spécifier un explicite RETOUR    clause,\nUtilisez le _UpdateBase.returning ()    méthode par déclaration:","html":"<p>INSERT / UPDATE… RETOURNER\nLe dialecte supporte les PG 8.2 INSERT..RECLINANT, MISE À JOUR..RECLINANT    et\nSUPPRIMER .. RETOURNER    syntaxes.   INSERT..RECLINANT    est utilisé par défaut\npour les instructions INSERT à une seule ligne afin d&#039;extraire les données nouvellement générées\nidentificateurs de clé primaire. 
INSERT/UPDATE...RETURNING

The dialect supports PG 8.2's `INSERT..RETURNING`, `UPDATE..RETURNING` and `DELETE..RETURNING` syntaxes. `INSERT..RETURNING` is used by default for single-row INSERT statements in order to fetch newly generated primary key identifiers. To specify an explicit `RETURNING` clause, use the `_UpdateBase.returning()` method on a per-statement basis:

```python
# INSERT..RETURNING
result = table.insert().returning(table.c.col1, table.c.col2).\
    values(name='foo')
print(result.fetchall())

# UPDATE..RETURNING
result = table.update().returning(table.c.col1, table.c.col2).\
    where(table.c.name == 'foo').values(name='bar')
print(result.fetchall())

# DELETE..RETURNING
result = table.delete().returning(table.c.col1, table.c.col2).\
    where(table.c.name == 'foo')
print(result.fetchall())
```

INSERT...ON CONFLICT (Upsert)

Starting with version 9.5, PostgreSQL allows "upserts" (update or insert) of rows into a table via the `ON CONFLICT` clause of the `INSERT` statement. A candidate row will only be inserted if that row does not violate any unique constraints. In the case of a unique constraint violation, a secondary action can occur, which can be either "DO UPDATE", indicating that the data in the target row should be updated, or "DO NOTHING", which indicates to silently skip this row.
Conflicts are determined using existing unique constraints and indexes. These constraints may be identified either using their name as stated in DDL, or they may be inferred by stating the columns and conditions that comprise the indexes.

SQLAlchemy provides `ON CONFLICT` support via the PostgreSQL-specific `postgresql.dml.insert()` function, which provides the generative methods `on_conflict_do_update()` and `on_conflict_do_nothing()`:

```python
from sqlalchemy.dialects.postgresql import insert

insert_stmt = insert(my_table).values(
    id='some_existing_id',
    data='inserted value')

do_nothing_stmt = insert_stmt.on_conflict_do_nothing(
    index_elements=['id']
)

conn.execute(do_nothing_stmt)

do_update_stmt = insert_stmt.on_conflict_do_update(
    constraint='pk_my_table',
    set_=dict(data='updated value')
)

conn.execute(do_update_stmt)
```

Both methods supply the "target" of the conflict using either the named constraint or by column inference:

The `Insert.on_conflict_do_update.index_elements` argument specifies a sequence containing string column names, `Column` objects, and/or SQL expression elements, which would identify a unique index:

```python
do_update_stmt = insert_stmt.on_conflict_do_update(
    index_elements=['id'],
    set_=dict(data='updated value')
)

do_update_stmt = insert_stmt.on_conflict_do_update(
    index_elements=[my_table.c.id],
    set_=dict(data='updated value')
)
```
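Not part of the original document: as a quick check of what these constructs render, a statement can be compiled against the PostgreSQL dialect without connecting to a database. The `my_table` definition here is a hypothetical two-column table introduced only for illustration:

```python
from sqlalchemy import Table, Column, MetaData, String
from sqlalchemy.dialects import postgresql
from sqlalchemy.dialects.postgresql import insert

# Hypothetical table, for illustration only
my_table = Table('my_table', MetaData(),
                 Column('id', String, primary_key=True),
                 Column('data', String))

stmt = insert(my_table).values(id='some_id', data='inserted value')
stmt = stmt.on_conflict_do_update(
    index_elements=['id'],
    set_=dict(data='updated value')
)

# Render the statement as PostgreSQL SQL without executing it
print(stmt.compile(dialect=postgresql.dialect()))
```

The output is an `INSERT INTO my_table ... ON CONFLICT (id) DO UPDATE SET data = ...` statement with bound-parameter placeholders in place of the values.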
When using `Insert.on_conflict_do_update.index_elements` to infer an index, a partial index can be inferred by also specifying the `Insert.on_conflict_do_update.index_where` parameter:

```python
from sqlalchemy.dialects.postgresql import insert

stmt = insert(my_table).values(user_email='a@b.com', data='inserted data')
stmt = stmt.on_conflict_do_update(
    index_elements=[my_table.c.user_email],
    index_where=my_table.c.user_email.like('%@gmail.com'),
    set_=dict(data=stmt.excluded.data)
    )
conn.execute(stmt)
```

The `Insert.on_conflict_do_update.constraint` argument is used to specify an index directly rather than inferring it. This can be the name of a UNIQUE constraint, a PRIMARY KEY constraint, or an INDEX:

```python
do_update_stmt = insert_stmt.on_conflict_do_update(
    constraint='my_table_idx_1',
    set_=dict(data='updated value')
)

do_update_stmt = insert_stmt.on_conflict_do_update(
    constraint='my_table_pk',
    set_=dict(data='updated value')
)
```

The `Insert.on_conflict_do_update.constraint` argument may also refer to a SQLAlchemy construct representing a constraint, e.g. `UniqueConstraint`, `PrimaryKeyConstraint`, `Index`, or `ExcludeConstraint`. In this use, if the constraint has a name, it is used directly. Otherwise, if the constraint is unnamed, then inference will be used, where the expressions and optional WHERE clause of the constraint will be spelled out in the construct. This use is especially convenient to refer to the named or unnamed primary key of a `Table` using the `Table.primary_key` attribute:

```python
do_update_stmt = insert_stmt.on_conflict_do_update(
    constraint=my_table.primary_key,
    set_=dict(data='updated value')
)
```

`ON CONFLICT...DO UPDATE` is used to perform an update of the already existing row, using any combination of new values as well as values from the proposed insertion. These values are specified using the `Insert.on_conflict_do_update.set_` parameter. This parameter accepts a dictionary which consists of direct values for UPDATE:
```python
from sqlalchemy.dialects.postgresql import insert

stmt = insert(my_table).values(id='some_id', data='inserted value')
do_update_stmt = stmt.on_conflict_do_update(
    index_elements=['id'],
    set_=dict(data='updated value')
    )
conn.execute(do_update_stmt)
```

In order to refer to the proposed insertion row, the special alias `excluded` is available as an attribute on the `postgresql.dml.Insert` object; this object is a `ColumnCollection` which contains all columns of the target table:

```python
from sqlalchemy.dialects.postgresql import insert

stmt = insert(my_table).values(
    id='some_id',
    data='inserted value',
    author='jlh')
do_update_stmt = stmt.on_conflict_do_update(
    index_elements=['id'],
    set_=dict(data='updated value', author=stmt.excluded.author)
    )
conn.execute(do_update_stmt)
```

The `Insert.on_conflict_do_update()` method also accepts a WHERE clause using the `Insert.on_conflict_do_update.where` parameter, which will limit those rows which receive an UPDATE:

```python
from sqlalchemy.dialects.postgresql import insert

stmt = insert(my_table).values(
    id='some_id',
    data='inserted value',
    author='jlh')
on_update_stmt = stmt.on_conflict_do_update(
    index_elements=['id'],
    set_=dict(data='updated value', author=stmt.excluded.author),
    where=(my_table.c.status == 2)
    )
conn.execute(on_update_stmt)
```

`ON CONFLICT` may also be used to skip inserting a row entirely if any conflict with a unique or exclusion constraint occurs; below this is illustrated using the `on_conflict_do_nothing()` method:

```python
from sqlalchemy.dialects.postgresql import insert

stmt = insert(my_table).values(id='some_id', data='inserted value')
stmt = stmt.on_conflict_do_nothing(index_elements=['id'])
conn.execute(stmt)
```

If `DO NOTHING` is used without specifying any columns or constraint, it has the effect of skipping the INSERT for any unique or exclusion constraint violation which occurs:
```python
from sqlalchemy.dialects.postgresql import insert

stmt = insert(my_table).values(id='some_id', data='inserted value')
stmt = stmt.on_conflict_do_nothing()
conn.execute(stmt)
```

New in version 1.1: Added support for PostgreSQL ON CONFLICT clauses.

Full Text Search

SQLAlchemy makes available the PostgreSQL `@@` operator via the `ColumnElement.match()` method on any textual column expression. On a PostgreSQL dialect, an expression like the following:

```python
select([sometable.c.text.match("search string")])
```

will emit to the database:

```sql
SELECT text @@ to_tsquery('search string') FROM table
```

The PostgreSQL text search functions such as `to_tsquery()` and `to_tsvector()` are available explicitly using the standard `func` construct. For example:

```python
select([
    func.to_tsvector('fat cats ate rats').match('cat & rat')
])
```

Emits the equivalent of:

```sql
SELECT to_tsvector('fat cats ate rats') @@ to_tsquery('cat & rat')
```

The `postgresql.TSVECTOR` type can provide for explicit CASTs:

```python
from sqlalchemy.dialects.postgresql import TSVECTOR
from sqlalchemy import select, cast
select([cast("some text", TSVECTOR)])
```

produces a statement equivalent to:

```sql
SELECT CAST('some text' AS TSVECTOR) AS anon_1
```
Full Text Searches in PostgreSQL are influenced by a combination of: the PostgreSQL setting of `default_text_search_config`, the `regconfig` used to build the GIN/GiST indexes, and the `regconfig` optionally passed in during a query.

When performing a Full Text Search against a column that has a GIN or GiST index that is already pre-computed (which is common on full text searches), one may need to explicitly pass in a particular PostgreSQL `regconfig` value to ensure the query planner utilizes the index and does not re-compute the column on demand.

In order to provide for this explicit query planning, or to use different search strategies, the `match()` method accepts a `postgresql_regconfig` keyword argument:

```python
select([mytable.c.id]).where(
    mytable.c.title.match('somestring', postgresql_regconfig='english')
)
```

Emits the equivalent of:

```sql
SELECT mytable.id FROM mytable
WHERE mytable.title @@ to_tsquery('english', 'somestring')
```

One can also specifically pass in a `regconfig` value to the `to_tsvector()` command as the initial argument:

```python
select([mytable.c.id]).where(
        func.to_tsvector('english', mytable.c.title)
        .match('somestring', postgresql_regconfig='english')
    )
```

produces a statement equivalent to:

```sql
SELECT mytable.id FROM mytable
WHERE to_tsvector('english', mytable.title) @@
    to_tsquery('english', 'somestring')
```
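Not in the original text: the SQL that `match()` renders can be inspected without a live database by compiling the expression against the PostgreSQL dialect. The `mytable` definition below is hypothetical, and note that newer SQLAlchemy versions may render `plainto_tsquery()` in place of `to_tsquery()`:

```python
from sqlalchemy import Table, Column, MetaData, String
from sqlalchemy.dialects import postgresql

# Hypothetical table, for illustration only
mytable = Table('mytable', MetaData(),
                Column('id', String, primary_key=True),
                Column('title', String))

expr = mytable.c.title.match('somestring', postgresql_regconfig='english')

# Render the expression as PostgreSQL SQL without executing it
print(expr.compile(dialect=postgresql.dialect()))
```

The rendered expression applies `@@` between the column and a tsquery built with the `'english'` regconfig, with the search string as a bound parameter.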
It is recommended that you use the EXPLAIN ANALYZE... tool from PostgreSQL
to ensure that you are generating queries with SQLAlchemy that take full
advantage of any indexes you may have created for full text search.

FROM ONLY ...

The dialect supports PostgreSQL's ONLY keyword for targeting only a
particular table in an inheritance hierarchy. This can be used to produce
the SELECT ... FROM ONLY, UPDATE ONLY ..., and DELETE FROM ONLY ...
syntaxes. It uses SQLAlchemy's hints mechanism:

    # SELECT ... FROM ONLY ...
    result = table.select().with_hint(table, 'ONLY', 'postgresql')
    print(result.fetchall())

    # UPDATE ONLY ...
    table.update(values=dict(foo='bar')).with_hint('ONLY',
                                                   dialect_name='postgresql')

    # DELETE FROM ONLY ...
    table.delete().with_hint('ONLY', dialect_name='postgresql')
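The hint mechanism above can also be exercised without a database by compiling the statement to a string. A minimal sketch, using a hypothetical `accounts` table as the parent of an inheritance hierarchy:

```python
from sqlalchemy import MetaData, Table, Column, Integer
from sqlalchemy.dialects import postgresql

metadata = MetaData()
# Hypothetical parent table in an inheritance hierarchy.
accounts = Table('accounts', metadata, Column('id', Integer))

# The 'ONLY' hint keeps the query from touching child tables.
stmt = accounts.select().with_hint(accounts, 'ONLY', 'postgresql')
sql = str(stmt.compile(dialect=postgresql.dialect()))
print(sql)
```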
PostgreSQL-Specific Index Options

Several extensions to the Index construct are available, specific to the
PostgreSQL dialect.

Partial Indexes

Partial indexes add criterion to the index definition so that the index is
applied to a subset of rows. These can be specified on Index using the
postgresql_where keyword argument:

    Index('my_index', mytable.c.id, postgresql_where=mytable.c.value > 10)

Operator Classes

PostgreSQL allows the specification of an operator class for each column of
an index (see
http://www.postgresql.org/docs/8.3/interactive/indexes-opclass.html).
The Index construct allows these to be specified via the postgresql_ops
keyword argument:

    Index(
        'my_index', mytable.c.id, mytable.c.data,
        postgresql_ops={
            'data': 'text_pattern_ops',
            'id': 'int4_ops'
        })

Note that the keys in the postgresql_ops dictionary are the "key" name of
the Column, i.e. the name used to access it from the .c collection of
Table, which can be configured to be different than the actual name of the
column as expressed in the database.

If postgresql_ops is to be used against a complex SQL expression such as a
function call, then to apply to the column it must be given a label that is
identified in the dictionary by name, e.g.:

    Index(
        'my_index', mytable.c.id,
        func.lower(mytable.c.data).label('data_lower'),
        postgresql_ops={
            'data_lower': 'text_pattern_ops',
            'id': 'int4_ops'
        })
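Both options can be verified offline by compiling the CREATE INDEX DDL to a string. A minimal sketch, assuming a hypothetical `mytable`, combining a partial-index predicate and per-column operator classes in one index:

```python
from sqlalchemy import MetaData, Table, Column, Integer, String, Index
from sqlalchemy.schema import CreateIndex
from sqlalchemy.dialects import postgresql

metadata = MetaData()
# Hypothetical table combining both index options.
mytable = Table('mytable', metadata,
                Column('id', Integer),
                Column('data', String))

idx = Index('my_index', mytable.c.id, mytable.c.data,
            postgresql_ops={'id': 'int4_ops', 'data': 'text_pattern_ops'},
            postgresql_where=mytable.c.id > 10)

# DDL can be compiled to a string without a database connection.
sql = str(CreateIndex(idx).compile(dialect=postgresql.dialect()))
print(sql)
```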
Index Types

PostgreSQL provides several index types: B-Tree, Hash, GiST, and GIN, as
well as the ability for users to create their own (see
http://www.postgresql.org/docs/8.3/static/indexes-types.html). These can be
specified on Index using the postgresql_using keyword argument:

    Index('my_index', mytable.c.data, postgresql_using='gin')

The value passed to the keyword argument will be simply passed through to
the underlying CREATE INDEX command, so it must be a valid index type for
your version of PostgreSQL.
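The pass-through behavior is visible in the compiled DDL. A short sketch, with a hypothetical `docs` table (in practice GIN is typically paired with tsvector or array columns rather than plain text):

```python
from sqlalchemy import MetaData, Table, Column, String, Index
from sqlalchemy.schema import CreateIndex
from sqlalchemy.dialects import postgresql

metadata = MetaData()
# Hypothetical table for demonstration only.
docs = Table('docs', metadata, Column('body', String))

# The 'gin' value is passed through verbatim to CREATE INDEX ... USING.
idx = Index('docs_body_gin', docs.c.body, postgresql_using='gin')
sql = str(CreateIndex(idx).compile(dialect=postgresql.dialect()))
print(sql)
```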
Index Storage Parameters

PostgreSQL allows storage parameters to be set on indexes. The storage
parameters available depend on the index method used by the index. Storage
parameters can be specified on Index using the postgresql_with keyword
argument:

    Index('my_index', mytable.c.data, postgresql_with={"fillfactor": 50})

PostgreSQL allows to define the tablespace in which to create the index.
The tablespace can be specified on Index using the postgresql_tablespace
keyword argument:

    Index('my_index', mytable.c.data, postgresql_tablespace='my_tablespace')

Note that the same option is available on Table as well.

Indexes with CONCURRENTLY

The PostgreSQL index option CONCURRENTLY is supported by passing the flag
postgresql_concurrently to the Index construct:

    tbl = Table('testtbl', m, Column('data', Integer))

    idx1 = Index('test_idx1', tbl.c.data, postgresql_concurrently=True)

The above index construct will render DDL for CREATE INDEX, assuming
PostgreSQL 8.2 or higher is detected or for a connection-less dialect, as:

    CREATE INDEX CONCURRENTLY test_idx1 ON testtbl (data)

For DROP INDEX, assuming PostgreSQL 9.2 or higher is detected or for a
connection-less dialect, it will emit:

    DROP INDEX CONCURRENTLY test_idx1
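The connection-less rendering described above can be checked directly, since a bare `postgresql.dialect()` has no server version information. A minimal sketch of both the CREATE and DROP forms:

```python
from sqlalchemy import MetaData, Table, Column, Integer, Index
from sqlalchemy.schema import CreateIndex, DropIndex
from sqlalchemy.dialects import postgresql

m = MetaData()
tbl = Table('testtbl', m, Column('data', Integer))
idx1 = Index('test_idx1', tbl.c.data, postgresql_concurrently=True)

# With a connection-less dialect (no detected server version), the
# CONCURRENTLY keyword is rendered for both CREATE and DROP.
create_sql = str(CreateIndex(idx1).compile(dialect=postgresql.dialect()))
drop_sql = str(DropIndex(idx1).compile(dialect=postgresql.dialect()))
print(create_sql)
print(drop_sql)
```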
New in version 1.1: support for CONCURRENTLY on DROP INDEX. The
CONCURRENTLY keyword is now only emitted if a high enough version of
PostgreSQL is detected on the connection (or for a connection-less
dialect).

When using CONCURRENTLY, the PostgreSQL database requires that the
statement be invoked outside of a transaction block. The Python DBAPI
enforces that even for a single statement, a transaction is present, so to
use this construct, the DBAPI's "autocommit" mode must be used:

    metadata = MetaData()
    table = Table(
        "foo", metadata,
        Column("id", String))
    index = Index(
        "foo_idx", table.c.id, postgresql_concurrently=True)

    with engine.connect() as conn:
        with conn.execution_options(isolation_level='AUTOCOMMIT'):
            table.create(conn)

PostgreSQL Index Reflection

The PostgreSQL database creates a UNIQUE INDEX implicitly whenever the
UNIQUE CONSTRAINT construct is used. When inspecting a table using
Inspector, the Inspector.get_indexes() and the
Inspector.get_unique_constraints() will report on these two constructs
distinctly; in the case of the index, the key duplicates_constraint will
be present in the index entry if it is detected as mirroring a constraint.
When reflecting using Table(..., autoload=True), the UNIQUE INDEX is not
returned in Table.indexes when it is detected as mirroring a
UniqueConstraint in the Table.constraints collection.

Changed in version 1.0.0: Table reflection now includes UniqueConstraint
objects present in the Table.constraints collection; the PostgreSQL
backend will no longer include a "mirrored" Index construct in
Table.indexes if it is detected as corresponding to a unique constraint.

Special Reflection Options

The Inspector used for the PostgreSQL backend is an instance of
PGInspector, which offers additional methods:

    from sqlalchemy import create_engine, inspect

    engine = create_engine("postgresql+psycopg2://localhost/test")
    insp = inspect(engine)  # will be a PGInspector

    print(insp.get_enums())

class sqlalchemy.dialects.postgresql.base.PGInspector(conn)

Bases: sqlalchemy.engine.reflection.Inspector

get_enums(schema=None)

Return a list of ENUM objects. Each member is a dictionary containing
these fields:

name - name of the enum

schema - the schema name for the enum.

visible - boolean, whether or not this enum is visible in the default
search path.

labels - a list of string labels that apply to the enum.

Parameters

schema - schema name. If None, the default schema (typically 'public') is
used. May also be set to '*' to indicate load enums for all schemas.

get_foreign_table_names(schema=None)

Return a list of FOREIGN TABLE names. Behavior is similar to that of
Inspector.get_table_names(), except that the list is limited to those
tables that report a relkind value of f.

get_table_oid(table_name, schema=None)

Return the OID for the given table name.

get_view_names(schema=None, include=('plain', 'materialized'))

Return all view names in schema.

Parameters

schema - Optional, retrieve names from a non-default schema. For special
quoting, use quoted_name.

include - specify which types of views to return. Passed as a string value
(for a single type) or a tuple (for any number of types). Defaults to
('plain', 'materialized').

PostgreSQL Table Options

Several options for CREATE TABLE are supported directly by the PostgreSQL
dialect in conjunction with the Table construct:

ARRAY Types

The PostgreSQL dialect supports arrays, both as multidimensional column
types as well as array literals:

JSON Types

The PostgreSQL dialect supports both JSON and JSONB datatypes, including
psycopg2's native support and support for all of PostgreSQL's special
operators:

HSTORE Type

The PostgreSQL HSTORE type as well as hstore literals are supported:

ENUM Types

PostgreSQL has an independently creatable TYPE structure which is used to
implement an enumerated type. This approach introduces significant
complexity on the SQLAlchemy side in terms of when this type should be
CREATED and DROPPED. The type object is also an independently reflectable
entity. The following sections should be consulted:
Using ENUM with ARRAY

The combination of ENUM and ARRAY is not directly supported by backend
DBAPIs at this time. In order to send and receive an ARRAY of ENUM, use
the following workaround type, which decorates the postgresql.ARRAY
datatype.

    from sqlalchemy import TypeDecorator
    from sqlalchemy.dialects.postgresql import ARRAY

    class ArrayOfEnum(TypeDecorator):
        impl = ARRAY

        def bind_expression(self, bindvalue):
            return sa.cast(bindvalue, self)

        def result_processor(self, dialect, coltype):
            super_rp = super(ArrayOfEnum, self).result_processor(
                dialect, coltype)

            def handle_raw_string(value):
                inner = re.match(r"^{(.*)}$", value).group(1)
                return inner.split(",") if inner else []

            def process(value):
                if value is None:
                    return None
                return super_rp(handle_raw_string(value))
            return process

E.g.:

    Table(
        'mydata', metadata,
        Column('id', Integer, primary_key=True),
        Column('data', ArrayOfEnum(ENUM('a', 'b', 'c', name='myenum')))
    )

This type is not included as a built-in type as it would be incompatible
with a DBAPI that suddenly decides to support ARRAY of ENUM directly in a
new version.
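The string-parsing helper in the workaround above can be tried in isolation with only the standard library. A minimal sketch (note it does not handle quoted array elements or embedded commas):

```python
import re

def handle_raw_string(value):
    # A PostgreSQL array comes back as a raw string such as "{a,b,c}";
    # strip the surrounding braces, then split on commas (empty -> []).
    inner = re.match(r"^{(.*)}$", value).group(1)
    return inner.split(",") if inner else []

print(handle_raw_string("{sad,ok,happy}"))  # ['sad', 'ok', 'happy']
print(handle_raw_string("{}"))              # []
```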
plus simple:","html":"<p>Utilisation de JSON / JSONB avec ARRAY\nSemblable à utiliser ENUM, pour un ARRAY of JSON / JSONB, nous devons rendre le\nCAST approprié, cependant les pilotes psycopg2 actuels semblent gérer le résultat\npour ARRAY of JSON automatiquement, le type est donc plus simple:</p>"},{"id":"text-165","type":"text","heading":"","plain_text":"classe CastingArray(Tableau):\n    def bind_expression(soi, bindvalue):\n        revenir sa.jeter(bindvalue, soi)","html":"<p>classe CastingArray(Tableau):\n    def bind_expression(soi, bindvalue):\n        revenir sa.jeter(bindvalue, soi)</p>"},{"id":"text-166","type":"text","heading":"","plain_text":"Par exemple.:","html":"<p>Par exemple.:</p>"},{"id":"text-167","type":"text","heading":"","plain_text":"Table(\n    &#39;mes données&#39;, métadonnées,\n    Colonne(&#39;id&#39;, Entier, clé primaire=Vrai),\n    Colonne(&#39;Les données&#39;, CastingArray(JSONB))\n)","html":"<p>Table(\n    &#039;mes données&#039;, métadonnées,\n    Colonne(&#039;id&#039;, Entier, clé primaire=Vrai),\n    Colonne(&#039;Les données&#039;, CastingArray(JSONB))\n)</p>"},{"id":"text-168","type":"text","heading":"","plain_text":"Types de données PostgreSQL\nComme avec tous les dialectes SQLAlchemy, tous les types UPPERCASE connus pour être\nvalables avec PostgreSQL sont importables à partir du dialecte de niveau supérieur, que ce soit\nils proviennent de sqlalchemy.types    ou du dialecte local:","html":"<p>Types de données PostgreSQL\nComme avec tous les dialectes SQLAlchemy, tous les types UPPERCASE connus pour être\nvalables avec PostgreSQL sont importables à partir du dialecte de niveau supérieur, que ce soit\nils proviennent de sqlalchemy.types    ou du dialecte local:</p>"},{"id":"text-169","type":"text","heading":"","plain_text":"de sqlalchemy.dialects.postgresql importation \n    Tableau, BIGINT, BIT, BOOLÉAN, BYTEA, CARBONISER, CIDR, DATE, \n    DOUBLE PRECISION, ENUM, FLOTTE, HSTORE, INET, ENTIER, \n    INTERVALLE, JSON, JSONB, 
MACADDR, ARGENT, NUMERIC, OID, REAL, SMALLINT, TEXT, \n    TEMPS, TIMESTAMP, UUID, VARCHAR, INT4RANGE, INT8RANGE, NUMRANGE, \n    DATERANGE, TSRANGE, TSTZRANGE, TSVECTOR","html":"<p>de sqlalchemy.dialects.postgresql importation \n    Tableau, BIGINT, BIT, BOOLÉAN, BYTEA, CARBONISER, CIDR, DATE, \n    DOUBLE PRECISION, ENUM, FLOTTE, HSTORE, INET, ENTIER, \n    INTERVALLE, JSON, JSONB, MACADDR, ARGENT, NUMERIC, OID, REAL, SMALLINT, TEXT, \n    TEMPS, TIMESTAMP, UUID, VARCHAR, INT4RANGE, INT8RANGE, NUMRANGE, \n    DATERANGE, TSRANGE, TSTZRANGE, TSVECTOR</p>"},{"id":"text-170","type":"text","heading":"","plain_text":"Types which are specific to PostgreSQL, or have PostgreSQL-specific\nconstruction arguments, are as follows:","html":"<p>Types which are specific to PostgreSQL, or have PostgreSQL-specific\nconstruction arguments, are as follows:</p>"},{"id":"text-171","type":"text","heading":"","plain_text":"class sqlalchemy.dialects.postgresql.aggregate_order_by(cible, *order_by)","html":"<p>class sqlalchemy.dialects.postgresql.aggregate_order_by(cible, *order_by)</p>"},{"id":"text-172","type":"text","heading":"","plain_text":"Bases: sqlalchemy.sql.expression.ColumnElement\nRepresent a PostgreSQL aggregate order by expression.\nE.g.:","html":"<p>Bases: sqlalchemy.sql.expression.ColumnElement\nRepresent a PostgreSQL aggregate order by expression.\nE.g.:</p>"},{"id":"text-173","type":"text","heading":"","plain_text":"de sqlalchemy.dialects.postgresql importation aggregate_order_by\nexpr = func.array_agg(aggregate_order_by(table.c.une, table.c.b.desc()))\nstmt = sélectionner([[[[expr])","html":"<p>de sqlalchemy.dialects.postgresql importation aggregate_order_by\nexpr = func.array_agg(aggregate_order_by(table.c.une, table.c.b.desc()))\nstmt = sélectionner([[[[expr])</p>"},{"id":"text-174","type":"text","heading":"","plain_text":"would represent the expression:","html":"<p>would represent the expression:</p>"},{"id":"text-175","type":"text","heading":"","plain_text":"SELECT 
Similarly:

    expr = func.string_agg(
        table.c.a,
        aggregate_order_by(literal_column("','"), table.c.a)
    )
    stmt = select([expr])

would represent:

    SELECT string_agg(a, ',' ORDER BY a) FROM table;

Changed in version 1.2.13: the ORDER BY argument may be multiple terms

class sqlalchemy.dialects.postgresql.array(clauses, **kw)

Bases: sqlalchemy.sql.expression.Tuple

A PostgreSQL ARRAY literal.

This is used to produce ARRAY literals in SQL expressions, e.g.:

    from sqlalchemy.dialects.postgresql import array
    from sqlalchemy.dialects import postgresql
    from sqlalchemy import select, func

    stmt = select([
                array([1, 2]) + array([3, 4, 5])
            ])

    print(stmt.compile(dialect=postgresql.dialect()))

Produces the SQL:

    SELECT ARRAY[%(param_1)s, %(param_2)s] ||
        ARRAY[%(param_3)s, %(param_4)s, %(param_5)s] AS anon_1

An instance of array will always have the datatype ARRAY. The "inner" type of the array is inferred from the values present, unless the type_ keyword argument is passed:

    array(['foo', 'bar'], type_=CHAR)

Multidimensional arrays are produced by nesting array constructs. The dimensionality of the final ARRAY type is calculated by recursively adding the dimensions of the inner ARRAY type:

    stmt = select([
        array([
            array([1, 2]), array([3, 4]), array([column('q'), column('x')])
        ])
    ])
    print(stmt.compile(dialect=postgresql.dialect()))

Produces:

    SELECT ARRAY[ARRAY[%(param_1)s, %(param_2)s],
    ARRAY[%(param_3)s, %(param_4)s], ARRAY[q, x]] AS anon_1

New in version 1.3.6: added support for multidimensional array literals
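The "recursively adding dimensions" rule can be pictured with plain nested Python lists. This standalone sketch (not SQLAlchemy code) computes dimensionality the same way, adding one dimension per level of nesting:

```python
def array_dimensions(value):
    """Count nested-list depth: a flat list is 1-D, a list of lists 2-D, etc."""
    if isinstance(value, list) and value:
        return 1 + array_dimensions(value[0])
    return 0

print(array_dimensions([1, 2]))                    # 1
print(array_dimensions([[1, 2], [3, 4]]))          # 2
print(array_dimensions([[[1], [2]], [[3], [4]]]))  # 3
```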
class sqlalchemy.dialects.postgresql.ARRAY(item_type, as_tuple=False, dimensions=None, zero_indexes=False)

Bases: sqlalchemy.types.ARRAY

PostgreSQL ARRAY type.

The postgresql.ARRAY type is constructed in the same way as the core types.ARRAY type; a member type is required, and a number of dimensions is recommended if the type is to be used for more than one dimension:

    from sqlalchemy.dialects import postgresql

    mytable = Table("mytable", metadata,
            Column("data", postgresql.ARRAY(Integer, dimensions=2))
        )

The postgresql.ARRAY type provides all operations defined on the core types.ARRAY type, including support for "dimensions", indexed access, and simple matching such as types.ARRAY.Comparator.any() and types.ARRAY.Comparator.all(). The postgresql.ARRAY class also provides PostgreSQL-specific methods for containment operations, including postgresql.ARRAY.Comparator.contains(), postgresql.ARRAY.Comparator.contained_by(), and postgresql.ARRAY.Comparator.overlap(), e.g.:

    mytable.c.data.contains([1, 2])

The postgresql.ARRAY type may not be supported on all PostgreSQL DBAPIs; it is currently known to work on psycopg2 only. Additionally, the postgresql.ARRAY type does not work directly in conjunction with the ENUM type. For a workaround, see the special type at Using ENUM with ARRAY.

class Comparator(expr)

Bases: sqlalchemy.types.Comparator

Define comparison operations for ARRAY.

Note that these operations are in addition to those provided by the base types.ARRAY.Comparator class, including types.ARRAY.Comparator.any() and types.ARRAY.Comparator.all().

contained_by(other)

Boolean expression. Test if elements are a proper subset of the elements of the argument array expression.

contains(other, **kwargs)

Boolean expression. Test if elements are a superset of the elements of the argument array expression.

overlap(other)

Boolean expression. Test if array has elements in common with an argument array expression.

__init__(item_type, as_tuple=False, dimensions=None, zero_indexes=False)

Construct an ARRAY.

E.g.:

    Column('myarray', ARRAY(Integer))

Arguments are:

Parameters

item_type – The data type of items of this array. Note that dimensionality is irrelevant here, so multi-dimensional arrays like INTEGER[][] are constructed as ARRAY(Integer), not as ARRAY(ARRAY(Integer)) or such.

as_tuple=False – Specify whether return results should be converted to tuples from lists. DBAPIs such as psycopg2 return lists by default. When tuples are returned, the results are hashable.

dimensions – if non-None, the ARRAY will assume a fixed number of dimensions. This will cause the DDL emitted for this ARRAY to include the exact number of bracket clauses [], and will also optimize the performance of the type overall. Note that PG arrays are always implicitly "non-dimensioned", meaning they can store any number of dimensions no matter how they were declared.

zero_indexes=False – when True, index values will be converted between Python zero-based and PostgreSQL one-based indexes, e.g. a value of one will be added to all index values before passing to the database.

sqlalchemy.dialects.postgresql.array_agg(*arg, **kw)

PostgreSQL-specific form of array_agg; ensures the return type is postgresql.ARRAY and not the plain types.ARRAY, unless an explicit type_ is passed.

sqlalchemy.dialects.postgresql.Any(other, arrexpr, operator=<built-in function eq>)

A synonym for the ARRAY.Comparator.any() method.

This method is legacy and is here for backwards-compatibility.

sqlalchemy.dialects.postgresql.All(other, arrexpr, operator=<built-in function eq>)

A synonym for the ARRAY.Comparator.all() method.

This method is legacy and is here for backwards-compatibility.

class sqlalchemy.dialects.postgresql.BIT(length=None, varying=False)

Bases: sqlalchemy.types.TypeEngine

class sqlalchemy.dialects.postgresql.BYTEA(length=None)

Bases: sqlalchemy.types.LargeBinary

__init__(length=None)

Construct a LargeBinary type.

Parameters

length – optional, a length for the column for use in DDL statements, for those binary types that accept a length, such as the MySQL BLOB type.
class sqlalchemy.dialects.postgresql.CIDR

Bases: sqlalchemy.types.TypeEngine

class sqlalchemy.dialects.postgresql.DOUBLE_PRECISION(precision=None, asdecimal=False, decimal_return_scale=None)

Bases: sqlalchemy.types.Float

__init__(precision=None, asdecimal=False, decimal_return_scale=None)

Construct a Float.

Parameters

precision – the numeric precision for use in DDL CREATE TABLE.

asdecimal – the same flag as that of Numeric, but defaults to False. Note that setting this flag to True results in floating point conversion.

decimal_return_scale – default scale to use when converting from floats to Python decimals. Floating point values will typically be much longer due to decimal inaccuracy, and most floating point database types don't have a notion of "scale", so by default the float type looks for the first ten decimal places when converting. Specifying this value will override that length. Note that the MySQL float types, which do include "scale", will use "scale" as the default for decimal_return_scale, if not otherwise specified.
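What "looking at the first ten decimal places" amounts to can be sketched with the standard decimal module. This illustrates the rounding idea only; it is not SQLAlchemy's actual conversion code:

```python
from decimal import Decimal

def float_to_decimal(value, decimal_return_scale=10):
    # Format the float to the requested number of decimal places,
    # then build an exact Decimal from that string.
    return Decimal("%.*f" % (decimal_return_scale, value))

print(float_to_decimal(1.0 / 3.0))     # 0.3333333333
print(float_to_decimal(1.0 / 3.0, 4))  # 0.3333
```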
class sqlalchemy.dialects.postgresql.ENUM(*enums, **kw)

Bases: sqlalchemy.types.NativeForEmulated, sqlalchemy.types.Enum

PostgreSQL ENUM type.

This is a subclass of types.Enum which includes support for PG's CREATE TYPE and DROP TYPE.

When the builtin type types.Enum is used and the Enum.native_enum flag is left at its default of True, the PostgreSQL backend will use a postgresql.ENUM type as the implementation, so the special create/drop rules will be used.

The create/drop behavior of ENUM is necessarily intricate, due to the awkward relationship the ENUM type has to its parent table, in that it may be "owned" by just a single table, or may be shared among many tables.

When using types.Enum or postgresql.ENUM in an "inline" fashion, the CREATE TYPE and DROP TYPE are emitted corresponding to when the Table.create() and Table.drop() methods are called:

    table = Table('sometable', metadata,
        Column('some_enum', ENUM('a', 'b', 'c', name='myenum'))
    )

    table.create(engine)  # will emit CREATE ENUM and CREATE TABLE
    table.drop(engine)  # will emit DROP TABLE and DROP ENUM

To use a common enumerated type between multiple tables, the best practice is to declare the types.Enum or postgresql.ENUM independently, and associate it with the MetaData object itself:

    my_enum = ENUM('a', 'b', 'c', name='myenum', metadata=metadata)

    t1 = Table('sometable_one', metadata,
        Column('some_enum', my_enum)
    )

    t2 = Table('sometable_two', metadata,
        Column('some_enum', my_enum)
    )

When this pattern is used, care must still be taken at the level of individual table creates. Emitting CREATE TABLE without also specifying checkfirst=True will still cause issues:

    t1.create(engine)  # will fail: no such type 'myenum'

If we specify checkfirst=True, the individual table-level create operation will check for the ENUM and create it if not present:

    # will check if enum exists, and emit CREATE TYPE if not
    t1.create(engine, checkfirst=True)

When using a metadata-level ENUM type, the type will always be created and dropped when the metadata-wide create/drop is called:

    metadata.create_all(engine)  # will emit CREATE TYPE
    metadata.drop_all(engine)  # will emit DROP TYPE

The type can also be created and dropped directly:

    my_enum.create(engine)
    my_enum.drop(engine)

Changed in version 1.0.0: The PostgreSQL postgresql.ENUM type now behaves more strictly with regards to CREATE/DROP. A metadata-level ENUM type will only be created and dropped at the metadata level, not the table level, with the exception of table.create(checkfirst=True). The table.drop() call will now emit a DROP TYPE for a table-level enumerated type.

__init__(*enums, **kw)

Construct an ENUM.

Arguments are the same as that of types.Enum, but also include the following parameters.

Parameters

create_type – Defaults to True. Indicates that CREATE TYPE should be emitted, after optionally checking for the presence of the type, when the parent table is being created; and additionally that DROP TYPE is called when the table is dropped. When False, no check will be performed and no CREATE TYPE or DROP TYPE is emitted, unless create() or drop() are called directly. Setting to False is helpful when invoking a creation scheme to a SQL file without access to the actual database – the create() and drop() methods can be used to emit SQL to a target bind.

create(bind=None, checkfirst=True)

Emit CREATE TYPE for this ENUM.

If the underlying dialect does not support PostgreSQL CREATE TYPE, no action is taken.

Parameters

bind – a connectable Engine, Connection, or similar object to emit SQL.

checkfirst – if True, a query against the PG catalog will be first performed to see if the type does not exist already before creating.
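The checkfirst behavior boils down to "query the catalog, then create only if absent". A toy standalone emulation (a Python set stands in for the pg_type catalog; not SQLAlchemy's actual code):

```python
def create_type(catalog, type_name, checkfirst=True):
    """Emulate ENUM.create(): emit CREATE TYPE unless the type exists."""
    if checkfirst and type_name in catalog:
        return None  # type already present, nothing emitted
    catalog.add(type_name)
    return "CREATE TYPE %s" % type_name

catalog = set()
print(create_type(catalog, "myenum"))  # CREATE TYPE myenum
print(create_type(catalog, "myenum"))  # None (already exists)
```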
True, a query against\nthe PG catalog will be first performed to see\nif the type does not exist already before\ncreating.</p>"},{"id":"text-269","type":"text","heading":"","plain_text":"drop(bind=None, checkfirst=True)","html":"<p>drop(bind=None, checkfirst=True)</p>"},{"id":"text-270","type":"text","heading":"","plain_text":"Émettre DROP TYPE    for this\nENUM.\nIf the underlying dialect does not support\nPostgreSQL DROP TYPE, no action is taken.","html":"<p>Émettre DROP TYPE    for this\nENUM.\nIf the underlying dialect does not support\nPostgreSQL DROP TYPE, no action is taken.</p>"},{"id":"text-271","type":"text","heading":"","plain_text":"Paramètres","html":"<p>Paramètres</p>"},{"id":"text-272","type":"text","heading":"","plain_text":"bind – a connectable Moteur,\nConnection, or similar object to emit\nSQL.","html":"<p>bind – a connectable Moteur,\nConnection, or similar object to emit\nSQL.</p>"},{"id":"text-273","type":"text","heading":"","plain_text":"checkfirst – if True, a query against\nthe PG catalog will be first performed to see\nif the type actually exists before dropping.","html":"<p>checkfirst – if True, a query against\nthe PG catalog will be first performed to see\nif the type actually exists before dropping.</p>"},{"id":"text-274","type":"text","heading":"","plain_text":"class sqlalchemy.dialects.postgresql.HSTORE(text_type=None)","html":"<p>class sqlalchemy.dialects.postgresql.HSTORE(text_type=None)</p>"},{"id":"text-275","type":"text","heading":"","plain_text":"Bases: sqlalchemy.types.Indexable, sqlalchemy.types.Concatenable, sqlalchemy.types.TypeEngine\nRepresent the PostgreSQL HSTORE type.\nle HSTORE    type stores dictionaries containing strings, e.g.:","html":"<p>Bases: sqlalchemy.types.Indexable, sqlalchemy.types.Concatenable, sqlalchemy.types.TypeEngine\nRepresent the PostgreSQL HSTORE type.\nle HSTORE    type stores dictionaries containing strings, e.g.:</p>"},{"id":"text-276","type":"text","heading":"","plain_text":"data_table = 
Table(&#39;data_table&#39;, metadata,\n    Column(&#39;id&#39;, Integer, primary_key=True),\n    Column(&#39;data&#39;, HSTORE)\n)","html":"<p>data_table = Table(&#039;data_table&#039;, metadata,\n    Column(&#039;id&#039;, Integer, primary_key=True),\n    Column(&#039;data&#039;, HSTORE)\n)</p>"},{"id":"text-277","type":"text","heading":"","plain_text":"avec engine.connect() comme conn:\n    conn.execute(\n        data_table.insérer(),\n        Les données = &quot;key1&quot;: &quot;value1&quot;, &quot;key2&quot;: &quot;value2&quot;\n    )","html":"<p>avec engine.connect() comme conn:\n    conn.execute(\n        data_table.insérer(),\n        Les données = &quot;key1&quot;: &quot;value1&quot;, &quot;key2&quot;: &quot;value2&quot;\n    )</p>"},{"id":"text-278","type":"text","heading":"","plain_text":"HSTORE    provides for a wide range of operations, including:","html":"<p>HSTORE    provides for a wide range of operations, including:</p>"},{"id":"text-279","type":"text","heading":"","plain_text":"Index operations:","html":"<p>Index operations:</p>"},{"id":"text-280","type":"text","heading":"","plain_text":"data_table.c.Les données[[[[&#39;some key&#39;] == &#39;some value&#39;","html":"<p>data_table.c.Les données[[[[&#039;some key&#039;] == &#039;some value&#039;</p>"},{"id":"text-281","type":"text","heading":"","plain_text":"Containment operations:","html":"<p>Containment operations:</p>"},{"id":"text-282","type":"text","heading":"","plain_text":"data_table.c.Les données.has_key(&#39;some key&#39;)","html":"<p>data_table.c.Les données.has_key(&#039;some key&#039;)</p>"},{"id":"text-283","type":"text","heading":"","plain_text":"data_table.c.Les données.has_all([[[[&#39;one&#39;, &#39;two&#39;, &#39;three&#39;])","html":"<p>data_table.c.Les données.has_all([[[[&#039;one&#039;, &#039;two&#039;, 
&#039;three&#039;])</p>"},{"id":"text-284","type":"text","heading":"","plain_text":"Concatenation:","html":"<p>Concatenation:</p>"},{"id":"text-285","type":"text","heading":"","plain_text":"data_table.c.data + {&quot;k1&quot;: &quot;v1&quot;}","html":"<p>data_table.c.data + {&quot;k1&quot;: &quot;v1&quot;}</p>"},{"id":"text-286","type":"text","heading":"","plain_text":"For a full list of special methods see\nHSTORE.comparator_factory.\nFor usage with the SQLAlchemy ORM, it may be desirable to combine\nthe usage of HSTORE    with the MutableDict    dictionary\nnow part of the sqlalchemy.ext.mutable\nextension.  This extension will allow “in-place” changes to the\ndictionary, e.g. addition of new keys or replacement/removal of existing\nkeys to/from the current dictionary, to produce events which will be\ndetected by the unit of work:","html":"<p>For a full list of special methods see\nHSTORE.comparator_factory.\nFor usage with the SQLAlchemy ORM, it may be desirable to combine\nthe usage of HSTORE    with the MutableDict    dictionary\nnow part of the sqlalchemy.ext.mutable\nextension.  This extension will allow “in-place” changes to the\ndictionary, e.g. 
addition of new keys or replacement/removal of existing\nkeys to/from the current dictionary, to produce events which will be\ndetected by the unit of work:</p>"},{"id":"text-287","type":"text","heading":"","plain_text":"from sqlalchemy.ext.mutable import MutableDict","html":"<p>from sqlalchemy.ext.mutable import MutableDict</p>"},{"id":"text-288","type":"text","heading":"","plain_text":"class MyClass(Base):\n    __tablename__ = &#39;data_table&#39;","html":"<p>class MyClass(Base):\n    __tablename__ = &#039;data_table&#039;</p>"},{"id":"text-289","type":"text","heading":"","plain_text":"id = Column(Integer, primary_key=True)\n    data = Column(MutableDict.as_mutable(HSTORE))","html":"<p>id = Column(Integer, primary_key=True)\n    data = Column(MutableDict.as_mutable(HSTORE))</p>"},{"id":"text-290","type":"text","heading":"","plain_text":"my_object = session.query(MyClass).one()","html":"<p>my_object = session.query(MyClass).one()</p>"},{"id":"text-291","type":"text","heading":"","plain_text":"# in-place mutation, requires Mutable extension\n# in order for the ORM to detect\nmy_object.data[&#39;some_key&#39;] = &#39;some value&#39;","html":"<p># in-place mutation, requires Mutable extension\n# in order for the ORM to detect\nmy_object.data[&#039;some_key&#039;] = &#039;some value&#039;</p>"},{"id":"text-292","type":"text","heading":"","plain_text":"session.commit()","html":"<p>session.commit()</p>"},{"id":"text-293","type":"text","heading":"","plain_text":"When the sqlalchemy.ext.mutable    extension is not used, the ORM\nwill not be alerted to any changes to the contents of an existing\ndictionary, unless that dictionary value is re-assigned to 
the\nHSTORE-attribute itself, thus generating a change event.</p>"},{"id":"text-294","type":"text","heading":"","plain_text":"See also\nhstore    &#8211; render the PostgreSQL hstore()    function.","html":"<p>See also\nhstore    &#8211; render the PostgreSQL hstore()    function.</p>"},{"id":"text-295","type":"text","heading":"","plain_text":"class Comparator(expr)","html":"<p>class Comparator(expr)</p>"},{"id":"text-296","type":"text","heading":"","plain_text":"Bases: sqlalchemy.types.Comparator, sqlalchemy.types.Comparator\nDefine comparison operations for HSTORE.","html":"<p>Bases: sqlalchemy.types.Comparator, sqlalchemy.types.Comparator\nDefine comparison operations for HSTORE.</p>"},{"id":"text-297","type":"text","heading":"","plain_text":"array()","html":"<p>array()</p>"},{"id":"text-298","type":"text","heading":"","plain_text":"Text array expression.  Returns array of alternating keys and\nvalues.","html":"<p>Text array expression.  Returns array of alternating keys and\nvalues.</p>"},{"id":"text-299","type":"text","heading":"","plain_text":"contained_by(other)","html":"<p>contained_by(other)</p>"},{"id":"text-300","type":"text","heading":"","plain_text":"Boolean expression.  Test if keys are a proper subset of the\nkeys of the argument jsonb expression.","html":"<p>Boolean expression.  Test if keys are a proper subset of the\nkeys of the argument jsonb expression.</p>"},{"id":"text-301","type":"text","heading":"","plain_text":"contains(other, **kwargs)","html":"<p>contains(other, **kwargs)</p>"},{"id":"text-302","type":"text","heading":"","plain_text":"Boolean expression.  
Test if keys (or array) are a superset\nof/contained the keys of the argument jsonb expression.</p>"},{"id":"text-303","type":"text","heading":"","plain_text":"defined(key)","html":"<p>defined(key)</p>"},{"id":"text-304","type":"text","heading":"","plain_text":"Boolean expression.  Test for presence of a non-NULL value for\nthe key.  Note that the key may be a SQLA expression.","html":"<p>Boolean expression.  Test for presence of a non-NULL value for\nthe key.  Note that the key may be a SQLA expression.</p>"},{"id":"text-305","type":"text","heading":"","plain_text":"delete(key)","html":"<p>delete(key)</p>"},{"id":"text-306","type":"text","heading":"","plain_text":"HStore expression.  Returns the contents of this hstore with the\ngiven key deleted.  Note that the key may be a SQLA expression.","html":"<p>HStore expression.  Returns the contents of this hstore with the\ngiven key deleted.  Note that the key may be a SQLA expression.</p>"},{"id":"text-307","type":"text","heading":"","plain_text":"has_all(other)","html":"<p>has_all(other)</p>"},{"id":"text-308","type":"text","heading":"","plain_text":"Boolean expression.  Test for presence of all keys in jsonb","html":"<p>Boolean expression.  Test for presence of all keys in jsonb</p>"},{"id":"text-309","type":"text","heading":"","plain_text":"has_any(other)","html":"<p>has_any(other)</p>"},{"id":"text-310","type":"text","heading":"","plain_text":"Boolean expression.  Test for presence of any key in jsonb","html":"<p>Boolean expression.  Test for presence of any key in jsonb</p>"},{"id":"text-311","type":"text","heading":"","plain_text":"has_key(other)","html":"<p>has_key(other)</p>"},{"id":"text-312","type":"text","heading":"","plain_text":"Boolean expression.  Test for presence of a key.  Note that the\nkey may be a SQLA expression.","html":"<p>Boolean expression.  Test for presence of a key.  
Note that the\nkey may be a SQLA expression.</p>"},{"id":"text-313","type":"text","heading":"","plain_text":"keys()","html":"<p>keys()</p>"},{"id":"text-314","type":"text","heading":"","plain_text":"Text array expression.  Returns array of keys.","html":"<p>Text array expression.  Returns array of keys.</p>"},{"id":"text-315","type":"text","heading":"","plain_text":"matrix()","html":"<p>matrix()</p>"},{"id":"text-316","type":"text","heading":"","plain_text":"Text array expression.  Returns array of [key, value] pairs.","html":"<p>Text array expression.  Returns array of [key, value] pairs.</p>"},{"id":"text-317","type":"text","heading":"","plain_text":"slice(array)","html":"<p>slice(array)</p>"},{"id":"text-318","type":"text","heading":"","plain_text":"HStore expression.  Returns a subset of an hstore defined by\narray of keys.","html":"<p>HStore expression.  Returns a subset of an hstore defined by\narray of keys.</p>"},{"id":"text-319","type":"text","heading":"","plain_text":"vals()","html":"<p>vals()</p>"},{"id":"text-320","type":"text","heading":"","plain_text":"Text array expression.  Returns array of values.","html":"<p>Text array expression.  
Returns array of values.</p>"},{"id":"text-321","type":"text","heading":"","plain_text":"__init__(text_type=None)","html":"<p>__init__(text_type=None)</p>"},{"id":"text-322","type":"text","heading":"","plain_text":"Construct a new HSTORE.","html":"<p>Construct a new HSTORE.</p>"},{"id":"text-323","type":"text","heading":"","plain_text":"Parameters","html":"<p>Parameters</p>"},{"id":"text-324","type":"text","heading":"","plain_text":"text_type &#8211; \nthe type that should be used for indexed values.\nDefaults to types.Text.","html":"<p>text_type &#8211; \nthe type that should be used for indexed values.\nDefaults to types.Text.</p>"},{"id":"text-325","type":"text","heading":"","plain_text":"bind_processor(dialect)","html":"<p>bind_processor(dialect)</p>"},{"id":"text-326","type":"text","heading":"","plain_text":"Return a conversion function for processing bind values.\nReturns a callable which will receive a bind parameter value\nas the sole positional argument and will return a value to\nsend to the DB-API.\nIf processing is not necessary, the method should return None.","html":"<p>Return a conversion function for processing bind values.\nReturns a callable which will receive a bind parameter value\nas the sole positional argument and will return a value to\nsend to the DB-API.\nIf processing is not necessary, the method should return None.</p>"},{"id":"text-327","type":"text","heading":"","plain_text":"Parameters","html":"<p>Parameters</p>"},{"id":"text-328","type":"text","heading":"","plain_text":"dialect – Dialect instance in use.","html":"<p>dialect – Dialect instance in use.</p>"},{"id":"text-329","type":"text","heading":"","plain_text":"comparator_factory","html":"<p>comparator_factory</p>"},{"id":"text-330","type":"text","heading":"","plain_text":"alias of HSTORE.Comparator","html":"<p>alias of HSTORE.Comparator</p>"},{"id":"text-331","type":"text","heading":"","plain_text":"result_processor(dialect, coltype)","html":"<p>result_processor(dialect, 
coltype)</p>"},{"id":"text-332","type":"text","heading":"","plain_text":"Return a conversion function for processing result row values.\nReturns a callable which will receive a result row column\nvalue as the sole positional argument and will return a value\nto return to the user.\nIf processing is not necessary, the method should return None.","html":"<p>Return a conversion function for processing result row values.\nReturns a callable which will receive a result row column\nvalue as the sole positional argument and will return a value\nto return to the user.\nIf processing is not necessary, the method should return None.</p>"},{"id":"text-333","type":"text","heading":"","plain_text":"Parameters","html":"<p>Parameters</p>"},{"id":"text-334","type":"text","heading":"","plain_text":"class sqlalchemy.dialects.postgresql.hstore(*args, **kwargs)","html":"<p>class sqlalchemy.dialects.postgresql.hstore(*args, **kwargs)</p>"},{"id":"text-335","type":"text","heading":"","plain_text":"Bases: sqlalchemy.sql.functions.GenericFunction\nConstruct an hstore value within a SQL expression using the\nPostgreSQL hstore()    function.\nThe hstore    function accepts one or two arguments as described\nin the PostgreSQL documentation.\nE.g.:","html":"<p>Bases: sqlalchemy.sql.functions.GenericFunction\nConstruct an hstore value within a SQL expression using the\nPostgreSQL hstore()    function.\nThe hstore    function accepts one or two arguments as described\nin the PostgreSQL documentation.\nE.g.:</p>"},{"id":"text-336","type":"text","heading":"","plain_text":"from sqlalchemy.dialects.postgresql import array, hstore","html":"<p>from sqlalchemy.dialects.postgresql import array, hstore</p>"},{"id":"text-337","type":"text","heading":"","plain_text":"select([hstore(&#39;key1&#39;, &#39;value1&#39;)])","html":"<p>select([hstore(&#039;key1&#039;, &#039;value1&#039;)])</p>"},{"id":"text-338","type":"text","heading":"","plain_text":"select([\n        
hstore(\n            array([&#39;key1&#39;, &#39;key2&#39;, &#39;key3&#39;]),\n            array([&#39;value1&#39;, &#39;value2&#39;, &#39;value3&#39;])\n        )\n    ])","html":"<p>select([\n        hstore(\n            array([&#039;key1&#039;, &#039;key2&#039;, &#039;key3&#039;]),\n            array([&#039;value1&#039;, &#039;value2&#039;, &#039;value3&#039;])\n        )\n    ])</p>"},{"id":"text-339","type":"text","heading":"","plain_text":"See also\nHSTORE    &#8211; the PostgreSQL HSTORE    datatype.","html":"<p>See also\nHSTORE    &#8211; the PostgreSQL HSTORE    datatype.</p>"},{"id":"text-340","type":"text","heading":"","plain_text":"type","html":"<p>type</p>"},{"id":"text-341","type":"text","heading":"","plain_text":"alias of HSTORE","html":"<p>alias of HSTORE</p>"},{"id":"text-342","type":"text","heading":"","plain_text":"class sqlalchemy.dialects.postgresql.INET","html":"<p>class sqlalchemy.dialects.postgresql.INET</p>"},{"id":"text-343","type":"text","heading":"","plain_text":"Bases: sqlalchemy.types.TypeEngine","html":"<p>Bases: sqlalchemy.types.TypeEngine</p>"},{"id":"text-344","type":"text","heading":"","plain_text":"class sqlalchemy.dialects.postgresql.INTERVAL(precision=None, fields=None)","html":"<p>class sqlalchemy.dialects.postgresql.INTERVAL(precision=None, fields=None)</p>"},{"id":"text-345","type":"text","heading":"","plain_text":"Bases: sqlalchemy.types.NativeForEmulated, sqlalchemy.types._AbstractInterval\nPostgreSQL INTERVAL type.\nThe INTERVAL type may not be supported on all DBAPIs.\nIt is known to work on psycopg2 and not pg8000 or zxjdbc.","html":"<p>Bases: sqlalchemy.types.NativeForEmulated, sqlalchemy.types._AbstractInterval\nPostgreSQL INTERVAL type.\nThe INTERVAL type may not be supported on all DBAPIs.\nIt is known to work on psycopg2 and not pg8000 or zxjdbc.</p>"},{"id":"text-346","type":"text","heading":"","plain_text":"__init__(precision=None, fields=None)","html":"<p>__init__(precision=None, 
fields=None)</p>"},{"id":"text-347","type":"text","heading":"","plain_text":"Construct an INTERVAL.","html":"<p>Construct an INTERVAL.</p>"},{"id":"text-348","type":"text","heading":"","plain_text":"Parameters","html":"<p>Parameters</p>"},{"id":"text-349","type":"text","heading":"","plain_text":"precision – optional integer precision value","html":"<p>precision – optional integer precision value</p>"},{"id":"text-350","type":"text","heading":"","plain_text":"fields &#8211; \nstring fields specifier.  allows storage of fields\nto be limited, such as &quot;YEAR&quot;, &quot;MONTH&quot;, &quot;DAY TO HOUR&quot;,\netc.","html":"<p>fields &#8211; \nstring fields specifier.  allows storage of fields\nto be limited, such as &quot;YEAR&quot;, &quot;MONTH&quot;, &quot;DAY TO HOUR&quot;,\netc.</p>"},{"id":"text-351","type":"text","heading":"","plain_text":"class sqlalchemy.dialects.postgresql.JSON(none_as_null=False, astext_type=None)","html":"<p>class sqlalchemy.dialects.postgresql.JSON(none_as_null=False, astext_type=None)</p>"},{"id":"text-352","type":"text","heading":"","plain_text":"Bases: sqlalchemy.types.JSON\nRepresent the PostgreSQL JSON type.\nThis type is a specialization of the Core-level types.JSON\ntype.   Be sure to read the documentation for types.JSON    for\nimportant tips regarding treatment of NULL values and ORM use.\nThe operators provided by the PostgreSQL version of JSON\ninclude:","html":"<p>Bases: sqlalchemy.types.JSON\nRepresent the PostgreSQL JSON type.\nThis type is a specialization of the Core-level types.JSON\ntype.   
Be sure to read the documentation for types.JSON    for\nimportant tips regarding treatment of NULL values and ORM use.\nThe operators provided by the PostgreSQL version of JSON\ninclude:</p>"},{"id":"text-353","type":"text","heading":"","plain_text":"Index operations (the -&gt;    operator):","html":"<p>Index operations (the -&gt;    operator):</p>"},{"id":"text-354","type":"text","heading":"","plain_text":"data_table.c.data[&#39;some key&#39;]","html":"<p>data_table.c.data[&#039;some key&#039;]</p>"},{"id":"text-355","type":"text","heading":"","plain_text":"data_table.c.data[5]","html":"<p>data_table.c.data[5]</p>"},{"id":"text-356","type":"text","heading":"","plain_text":"Index operations returning text (the -&gt;&gt;    operator):","html":"<p>Index operations returning text (the -&gt;&gt;    operator):</p>"},{"id":"text-357","type":"text","heading":"","plain_text":"data_table.c.data[&#39;some key&#39;].astext == &#39;some value&#39;","html":"<p>data_table.c.data[&#039;some key&#039;].astext == &#039;some value&#039;</p>"},{"id":"text-358","type":"text","heading":"","plain_text":"Index operations with CAST\n(equivalent to CAST(col -&gt;&gt; [&#39;some key&#39;] AS &lt;type&gt;)):","html":"<p>Index operations with CAST\n(equivalent to CAST(col -&gt;&gt; [&#039;some key&#039;] AS &lt;type&gt;)):</p>"},{"id":"text-359","type":"text","heading":"","plain_text":"data_table.c.data[&#39;some key&#39;].astext.cast(Integer) == 5","html":"<p>data_table.c.data[&#039;some key&#039;].astext.cast(Integer) == 5</p>"},{"id":"text-360","type":"text","heading":"","plain_text":"Path index operations (the #&gt;    operator):","html":"<p>Path index operations (the #&gt;    operator):</p>"},{"id":"text-361","type":"text","heading":"","plain_text":"data_table.c.data[(&#39;key_1&#39;, &#39;key_2&#39;, 5, ..., &#39;key_n&#39;)]","html":"<p>data_table.c.data[(&#039;key_1&#039;, &#039;key_2&#039;, 5, ..., &#039;key_n&#039;)]</p>"},{"id":"text-362","type":"text","heading":"","plain_text":"Path index operations returning text (the #&gt;&gt;    operator):","html":"<p>Path index operations returning text (the #&gt;&gt;    operator):</p>"},{"id":"text-363","type":"text","heading":"","plain_text":"data_table.c.data[(&#39;key_1&#39;, &#39;key_2&#39;, 5, ..., &#39;key_n&#39;)].astext == &#39;some value&#39;","html":"<p>data_table.c.data[(&#039;key_1&#039;, &#039;key_2&#039;, 5, ..., &#039;key_n&#039;)].astext == &#039;some value&#039;</p>"},{"id":"text-364","type":"text","heading":"","plain_text":"Changed in version 1.1: the ColumnElement.cast()    operator on\nJSON objects now requires that the JSON.Comparator.astext\nmodifier be called explicitly, if the cast works only from a textual\nstring.","html":"<p>Changed in version 1.1: the ColumnElement.cast()    operator on\nJSON objects now requires that the JSON.Comparator.astext\nmodifier be called explicitly, if the cast works only from a textual\nstring.</p>"},{"id":"text-365","type":"text","heading":"","plain_text":"Index operations return an expression object whose type defaults to\nJSON    by default, so that further JSON-oriented instructions\nmay be called upon the result type.\nCustom serializers and deserializers are specified at the dialect level,\nthat is using create_engine(). The reason for this is that when\nusing psycopg2, the DBAPI only allows serializers at the per-cursor\nor per-connection level.   E.g.:","html":"<p>Index operations return an expression object whose type defaults to\nJSON    by default, so that further JSON-oriented instructions\nmay be called upon the result type.\nCustom serializers and deserializers are specified at the dialect level,\nthat is using create_engine(). The reason for this is that when\nusing psycopg2, the DBAPI only allows serializers at the per-cursor\nor per-connection level.   
E.g.:</p>"},{"id":"text-366","type":"text","heading":"","plain_text":"engine = create_engine(&quot;postgresql://scott:tiger@localhost/test&quot;,\n                        json_serializer=my_serialize_fn,\n                        json_deserializer=my_deserialize_fn\n                )","html":"<p>engine = create_engine(&quot;postgresql://scott:tiger@localhost/test&quot;,\n                        json_serializer=my_serialize_fn,\n                        json_deserializer=my_deserialize_fn\n                )</p>"},{"id":"text-367","type":"text","heading":"","plain_text":"When using the psycopg2 dialect, the json_deserializer is registered\nagainst the database using psycopg2.extras.register_default_json.","html":"<p>When using the psycopg2 dialect, the json_deserializer is registered\nagainst the database using psycopg2.extras.register_default_json.</p>"},{"id":"text-368","type":"text","heading":"","plain_text":"class Comparator(expr)","html":"<p>class Comparator(expr)</p>"},{"id":"text-369","type":"text","heading":"","plain_text":"Bases: sqlalchemy.types.Comparator\nDefine comparison operations for JSON.","html":"<p>Bases: sqlalchemy.types.Comparator\nDefine comparison operations for JSON.</p>"},{"id":"text-370","type":"text","heading":"","plain_text":"property astext","html":"<p>property astext</p>"},{"id":"text-371","type":"text","heading":"","plain_text":"On an indexed expression, use the “astext” (e.g. “-&gt;&gt;”)\nconversion when rendered in SQL.\nE.g.:","html":"<p>On an indexed expression, use the “astext” (e.g. 
“-&gt;&gt;”)\nconversion when rendered in SQL.\nE.g.:</p>"},{"id":"text-372","type":"text","heading":"","plain_text":"select([data_table.c.data[&#39;some key&#39;].astext])","html":"<p>select([data_table.c.data[&#039;some key&#039;].astext])</p>"},{"id":"text-373","type":"text","heading":"","plain_text":"__init__(none_as_null=False, astext_type=None)","html":"<p>__init__(none_as_null=False, astext_type=None)</p>"},{"id":"text-374","type":"text","heading":"","plain_text":"Construct a JSON    type.","html":"<p>Construct a JSON    type.</p>"},{"id":"text-375","type":"text","heading":"","plain_text":"Parameters","html":"<p>Parameters</p>"},{"id":"text-376","type":"text","heading":"","plain_text":"none_as_null &#8211; \nif True, persist the value None    as a\nSQL NULL value, not the JSON encoding of null. Note that\nwhen this flag is False, the null()    construct can still\nbe used to persist a NULL value:","html":"<p>none_as_null &#8211; \nif True, persist the value None    as a\nSQL NULL value, not the JSON encoding of null. Note that\nwhen this flag is False, the null()    construct can still\nbe used to persist a NULL value:</p>"},{"id":"text-377","type":"text","heading":"","plain_text":"from sqlalchemy import null\nconn.execute(table.insert(), data=null())","html":"<p>from sqlalchemy import null\nconn.execute(table.insert(), data=null())</p>"},{"id":"text-378","type":"text","heading":"","plain_text":"Changed in version 0.9.8: &#8211; Added none_as_null, and null()\nis now supported in order to persist a NULL value.","html":"<p>Changed in version 0.9.8: &#8211; Added none_as_null, and null()\nis now supported in order to persist a NULL value.</p>"},{"id":"text-379","type":"text","heading":"","plain_text":"astext_type &#8211; \nthe type to use for the\nJSON.Comparator.astext\naccessor on indexed attributes.  
Defaults to types.Text.","html":"<p>astext_type &#8211; \nthe type to use for the\nJSON.Comparator.astext\naccessor on indexed attributes.  Defaults to types.Text.</p>"},{"id":"text-380","type":"text","heading":"","plain_text":"comparator_factory","html":"<p>comparator_factory</p>"},{"id":"text-381","type":"text","heading":"","plain_text":"alias of JSON.Comparator","html":"<p>alias of JSON.Comparator</p>"},{"id":"text-382","type":"text","heading":"","plain_text":"class sqlalchemy.dialects.postgresql.JSONB(none_as_null=False, astext_type=None)","html":"<p>class sqlalchemy.dialects.postgresql.JSONB(none_as_null=False, astext_type=None)</p>"},{"id":"text-383","type":"text","heading":"","plain_text":"Bases: sqlalchemy.dialects.postgresql.json.JSON\nRepresent the PostgreSQL JSONB type.\nThe JSONB    type stores arbitrary JSONB format data, e.g.:","html":"<p>Bases: sqlalchemy.dialects.postgresql.json.JSON\nRepresent the PostgreSQL JSONB type.\nThe JSONB    type stores arbitrary JSONB format data, e.g.:</p>"},{"id":"text-384","type":"text","heading":"","plain_text":"data_table = Table(&#39;data_table&#39;, metadata,\n    Column(&#39;id&#39;, Integer, primary_key=True),\n    Column(&#39;data&#39;, JSONB)\n)","html":"<p>data_table = Table(&#039;data_table&#039;, metadata,\n    Column(&#039;id&#039;, Integer, primary_key=True),\n    Column(&#039;data&#039;, JSONB)\n)</p>"},{"id":"text-385","type":"text","heading":"","plain_text":"with engine.connect() as conn:\n    conn.execute(\n        data_table.insert(),\n        data = {&quot;key1&quot;: &quot;value1&quot;, &quot;key2&quot;: &quot;value2&quot;}\n    )","html":"<p>with engine.connect() as conn:\n    conn.execute(\n        data_table.insert(),\n        data = {&quot;key1&quot;: &quot;value1&quot;, &quot;key2&quot;: &quot;value2&quot;}\n    )</p>"},{"id":"text-386","type":"text","heading":"","plain_text":"The JSONB    type includes all operations provided by\nJSON, including the same behaviors for indexing 
operations.\nIt also adds additional operators specific to JSONB, including\nJSONB.Comparator.has_key(), JSONB.Comparator.has_all(),\nJSONB.Comparator.has_any(), JSONB.Comparator.contains(),\nand JSONB.Comparator.contained_by().\nLike the JSON    type, the JSONB    type does not detect\nin-place changes when used with the ORM, unless the\nsqlalchemy.ext.mutable    extension is used.\nCustom serializers and deserializers\nare shared with the JSON    class, using the json_serializer\nand json_deserializer    keyword arguments.  These must be specified\nat the dialect level using create_engine(). When using\npsycopg2, the serializers are associated with the jsonb type using\npsycopg2.extras.register_default_jsonb    on a per-connection basis,\nin the same way that psycopg2.extras.register_default_json    is used\nto register these handlers with the json type.","html":"<p>The JSONB    type includes all operations provided by\nJSON, including the same behaviors for indexing operations.\nIt also adds additional operators specific to JSONB, including\nJSONB.Comparator.has_key(), JSONB.Comparator.has_all(),\nJSONB.Comparator.has_any(), JSONB.Comparator.contains(),\nand JSONB.Comparator.contained_by().\nLike the JSON    type, the JSONB    type does not detect\nin-place changes when used with the ORM, unless the\nsqlalchemy.ext.mutable    extension is used.\nCustom serializers and deserializers\nare shared with the JSON    class, using the json_serializer\nand json_deserializer    keyword arguments.  These must be specified\nat the dialect level using create_engine(). 
When using\npsycopg2, the serializers are associated with the jsonb type using\npsycopg2.extras.register_default_jsonb    on a per-connection basis,\nin the same way that psycopg2.extras.register_default_json    is used\nto register these handlers with the json type.</p>"},{"id":"text-387","type":"text","heading":"","plain_text":"class Comparator(expr)","html":"<p>class Comparator(expr)</p>"},{"id":"text-388","type":"text","heading":"","plain_text":"Bases: sqlalchemy.dialects.postgresql.json.Comparator\nDefine comparison operations for JSON.","html":"<p>Bases: sqlalchemy.dialects.postgresql.json.Comparator\nDefine comparison operations for JSON.</p>"},{"id":"text-389","type":"text","heading":"","plain_text":"contained_by(other)","html":"<p>contained_by(other)</p>"},{"id":"text-390","type":"text","heading":"","plain_text":"Boolean expression.  Test if keys are a proper subset of the\nkeys of the argument jsonb expression.","html":"<p>Boolean expression.  Test if keys are a proper subset of the\nkeys of the argument jsonb expression.</p>"},{"id":"text-391","type":"text","heading":"","plain_text":"contains(other, **kwargs)","html":"<p>contains(other, **kwargs)</p>"},{"id":"text-392","type":"text","heading":"","plain_text":"Boolean expression.  Test if keys (or array) are a superset\nof/contained the keys of the argument jsonb expression.","html":"<p>Boolean expression.  Test if keys (or array) are a superset\nof/contained the keys of the argument jsonb expression.</p>"},{"id":"text-393","type":"text","heading":"","plain_text":"has_all(other)","html":"<p>has_all(other)</p>"},{"id":"text-394","type":"text","heading":"","plain_text":"Boolean expression.  Test for presence of all keys in jsonb","html":"<p>Boolean expression.  Test for presence of all keys in jsonb</p>"},{"id":"text-395","type":"text","heading":"","plain_text":"has_any(other)","html":"<p>has_any(other)</p>"},{"id":"text-396","type":"text","heading":"","plain_text":"Boolean expression.  
Test for presence of any key in jsonb","html":"<p>Boolean expression.  Test for presence of any key in jsonb</p>"},{"id":"text-397","type":"text","heading":"","plain_text":"has_key(other)","html":"<p>has_key(other)</p>"},{"id":"text-398","type":"text","heading":"","plain_text":"Boolean expression.  Test for presence of a key.  Note that the\nkey may be a SQLA expression.","html":"<p>Boolean expression.  Test for presence of a key.  Note that the\nkey may be a SQLA expression.</p>"},{"id":"text-399","type":"text","heading":"","plain_text":"comparator_factory","html":"<p>comparator_factory</p>"},{"id":"text-400","type":"text","heading":"","plain_text":"alias of JSONB.Comparator","html":"<p>alias of JSONB.Comparator</p>"},{"id":"text-401","type":"text","heading":"","plain_text":"class sqlalchemy.dialects.postgresql.MACADDR","html":"<p>class sqlalchemy.dialects.postgresql.MACADDR</p>"},{"id":"text-402","type":"text","heading":"","plain_text":"Bases: sqlalchemy.types.TypeEngine","html":"<p>Bases: sqlalchemy.types.TypeEngine</p>"},{"id":"text-403","type":"text","heading":"","plain_text":"class sqlalchemy.dialects.postgresql.MONEY","html":"<p>class sqlalchemy.dialects.postgresql.MONEY</p>"},{"id":"text-404","type":"text","heading":"","plain_text":"Bases: sqlalchemy.types.TypeEngine\nProvide the PostgreSQL MONEY type.","html":"<p>Bases: sqlalchemy.types.TypeEngine\nProvide the PostgreSQL MONEY type.</p>"},{"id":"text-405","type":"text","heading":"","plain_text":"class sqlalchemy.dialects.postgresql.OID","html":"<p>class sqlalchemy.dialects.postgresql.OID</p>"},{"id":"text-406","type":"text","heading":"","plain_text":"Bases: sqlalchemy.types.TypeEngine\nProvide the PostgreSQL OID type.","html":"<p>Bases: sqlalchemy.types.TypeEngine\nProvide the PostgreSQL OID type.</p>"},{"id":"text-407","type":"text","heading":"","plain_text":"class sqlalchemy.dialects.postgresql.REAL(precision=None, asdecimal=False, decimal_return_scale=None)","html":"<p>class 
sqlalchemy.dialects.postgresql.REAL(precision=None, asdecimal=False, decimal_return_scale=None)</p>"},{"id":"text-408","type":"text","heading":"","plain_text":"Bases: sqlalchemy.types.Float\nThe SQL REAL type.","html":"<p>Bases: sqlalchemy.types.Float\nThe SQL REAL type.</p>"},{"id":"text-409","type":"text","heading":"","plain_text":"__init__(precision=None, asdecimal=False, decimal_return_scale=None)","html":"<p>__init__(precision=None, asdecimal=False, decimal_return_scale=None)</p>"},{"id":"text-410","type":"text","heading":"","plain_text":"Construct a Float.","html":"<p>Construct a Float.</p>"},{"id":"text-411","type":"text","heading":"","plain_text":"Parameters","html":"<p>Parameters</p>"},{"id":"text-412","type":"text","heading":"","plain_text":"precision – the numeric precision for use in DDL CREATE\nTABLE.","html":"<p>precision – the numeric precision for use in DDL CREATE\nTABLE.</p>"},{"id":"text-413","type":"text","heading":"","plain_text":"asdecimal – the same flag as that of Numeric, but\ndefaults to False. Note that setting this flag to True\nresults in floating point conversion.","html":"<p>asdecimal – the same flag as that of Numeric, but\ndefaults to False. Note that setting this flag to True\nresults in floating point conversion.</p>"},{"id":"text-414","type":"text","heading":"","plain_text":"decimal_return_scale &#8211; \nDefault scale to use when converting\nfrom floats to Python decimals.  Floating point values will typically\nbe much longer due to decimal inaccuracy, and most floating point\ndatabase types don’t have a notion of “scale”, so by default the\nfloat type looks for the first ten decimal places when converting.\nSpecifying this value will override that length.  Note that the\nMySQL float types, which do include “scale”, will use “scale”\nas the default for decimal_return_scale, if not otherwise specified.","html":"<p>decimal_return_scale &#8211; \nDefault scale to use when converting\nfrom floats to Python decimals.  
Floating point values will typically\nbe much longer due to decimal inaccuracy, and most floating point\ndatabase types don’t have a notion of “scale”, so by default the\nfloat type looks for the first ten decimal places when converting.\nSpecifying this value will override that length.  Note that the\nMySQL float types, which do include “scale”, will use “scale”\nas the default for decimal_return_scale, if not otherwise specified.</p>"},{"id":"text-415","type":"text","heading":"","plain_text":"class sqlalchemy.dialects.postgresql.REGCLASS","html":"<p>class sqlalchemy.dialects.postgresql.REGCLASS</p>"},{"id":"text-416","type":"text","heading":"","plain_text":"Bases: sqlalchemy.types.TypeEngine\nProvide the PostgreSQL REGCLASS type.","html":"<p>Bases: sqlalchemy.types.TypeEngine\nProvide the PostgreSQL REGCLASS type.</p>"},{"id":"text-417","type":"text","heading":"","plain_text":"class sqlalchemy.dialects.postgresql.TSVECTOR","html":"<p>class sqlalchemy.dialects.postgresql.TSVECTOR</p>"},{"id":"text-418","type":"text","heading":"","plain_text":"Bases: sqlalchemy.types.TypeEngine\nle postgresql.TSVECTOR    type implements the PostgreSQL\ntext search type TSVECTOR.\nIt can be used to do full text queries on natural language\ndocuments.","html":"<p>Bases: sqlalchemy.types.TypeEngine\nle postgresql.TSVECTOR    type implements the PostgreSQL\ntext search type TSVECTOR.\nIt can be used to do full text queries on natural language\ndocuments.</p>"},{"id":"text-419","type":"text","heading":"","plain_text":"class sqlalchemy.dialects.postgresql.UUID(as_uuid=False)","html":"<p>class sqlalchemy.dialects.postgresql.UUID(as_uuid=False)</p>"},{"id":"text-420","type":"text","heading":"","plain_text":"Bases: sqlalchemy.types.TypeEngine\nPostgreSQL UUID type.\nRepresents the UUID column type, interpreting\ndata either as natively returned by the DBAPI\nor as Python uuid objects.\nThe UUID type may not be supported on all DBAPIs.\nIt is known to work on psycopg2 and not 
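As a small sketch of the as_uuid flag (the table and column names here are illustrative, not part of the original documentation), a column declared with UUID renders as the native PostgreSQL UUID type in DDL, and no database connection is needed to see this:

```python
import uuid

from sqlalchemy import Column, Integer, MetaData, Table
from sqlalchemy.dialects import postgresql
from sqlalchemy.schema import CreateTable

metadata = MetaData()

# Hypothetical table: with as_uuid=True, values round-trip as Python
# uuid.UUID objects rather than strings (supported on psycopg2).
tokens = Table(
    "tokens",
    metadata,
    Column("id", Integer, primary_key=True),
    Column("token", postgresql.UUID(as_uuid=True), unique=True),
)

# Compile the CREATE TABLE statement against the PostgreSQL dialect.
ddl = str(CreateTable(tokens).compile(dialect=postgresql.dialect()))
print(ddl)  # the "token" column renders with the UUID type
```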
Range Types

The range column types found in PostgreSQL 9.2 onwards are catered for by the following types:

class sqlalchemy.dialects.postgresql.INT4RANGE

Bases: sqlalchemy.dialects.postgresql.ranges.RangeOperators, sqlalchemy.types.TypeEngine
Represent the PostgreSQL INT4RANGE type.
class sqlalchemy.dialects.postgresql.INT8RANGE

Bases: sqlalchemy.dialects.postgresql.ranges.RangeOperators, sqlalchemy.types.TypeEngine
Represent the PostgreSQL INT8RANGE type.

class sqlalchemy.dialects.postgresql.NUMRANGE

Bases: sqlalchemy.dialects.postgresql.ranges.RangeOperators, sqlalchemy.types.TypeEngine
Represent the PostgreSQL NUMRANGE type.

class sqlalchemy.dialects.postgresql.DATERANGE

Bases: sqlalchemy.dialects.postgresql.ranges.RangeOperators, sqlalchemy.types.TypeEngine
Represent the PostgreSQL DATERANGE type.

class sqlalchemy.dialects.postgresql.TSRANGE

Bases: sqlalchemy.dialects.postgresql.ranges.RangeOperators, sqlalchemy.types.TypeEngine
Represent the PostgreSQL TSRANGE type.

class sqlalchemy.dialects.postgresql.TSTZRANGE

Bases: sqlalchemy.dialects.postgresql.ranges.RangeOperators, sqlalchemy.types.TypeEngine
Represent the PostgreSQL TSTZRANGE type.

The types above get most of their functionality from the following mixin:

class sqlalchemy.dialects.postgresql.ranges.RangeOperators

This mixin provides functionality for the Range Operators listed in Table 9-44 of the postgres documentation for Range Functions and Operators. It is used by all the range types provided in the postgres dialect and can likely be used for any range types you create yourself.
No extra support is provided for the Range Functions listed in Table 9-45 of the postgres documentation. For these, the normal func() object should be used.
class comparator_factory(expr)

Bases: sqlalchemy.types.Comparator
Define comparison operations for range types.

__ne__(other)

Boolean expression. Returns true if two ranges are not equal.

adjacent_to(other)

Boolean expression. Returns true if the range in the column is adjacent to the range in the operand.

contained_by(other)

Boolean expression. Returns true if the column is contained within the right hand operand.

contains(other, **kw)

Boolean expression. Returns true if the right hand operand, which can be an element or a range, is contained within the column.

not_extend_left_of(other)

Boolean expression. Returns true if the range in the column does not extend left of the range in the operand.

not_extend_right_of(other)

Boolean expression. Returns true if the range in the column does not extend right of the range in the operand.

overlaps(other)

Boolean expression. Returns true if the column overlaps (has points in common with) the right hand operand.

strictly_left_of(other)

Boolean expression. Returns true if the column is strictly left of the right hand operand.

strictly_right_of(other)

Boolean expression. Returns true if the column is strictly right of the right hand operand.

Warning

The range type DDL support should work with any PostgreSQL DBAPI driver, however the data types returned may vary. If you are using psycopg2, it's recommended to upgrade to version 2.5 or later before using these column types.
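The operators above can be exercised without a live database by compiling an expression against the PostgreSQL dialect. As an illustrative sketch (the table here is an assumption of this snippet), overlaps() renders PostgreSQL's && range operator:

```python
from sqlalchemy import Column, Integer, MetaData, Table
from sqlalchemy.dialects import postgresql

metadata = MetaData()

# Illustrative table with a timestamp-range column.
room_booking = Table(
    "room_booking",
    metadata,
    Column("room", Integer, primary_key=True),
    Column("during", postgresql.TSRANGE()),
)

# overlaps() compiles to the && range operator.
expr = room_booking.c.during.overlaps("[2013-03-23,2013-03-24)")
sql = str(expr.compile(dialect=postgresql.dialect()))
print(sql)  # e.g. room_booking.during && %(during_1)s
```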
When instantiating models that use these column types, you should pass whatever data type is expected by the DBAPI driver you're using for the column type. For psycopg2 these are psycopg2.extras.NumericRange, psycopg2.extras.DateRange, psycopg2.extras.DateTimeRange and psycopg2.extras.DateTimeTZRange, or the class you've registered with psycopg2.extras.register_range.
For example:

from datetime import datetime

from psycopg2.extras import DateTimeRange
from sqlalchemy.dialects.postgresql import TSRANGE

class RoomBooking(Base):

    __tablename__ = 'room_booking'

    room = Column(Integer(), primary_key=True)
    during = Column(TSRANGE())

booking = RoomBooking(
    room=101,
    during=DateTimeRange(datetime(2013, 3, 23), None)
)

PostgreSQL Constraint Types

SQLAlchemy supports PostgreSQL EXCLUDE constraints via the ExcludeConstraint class:

class sqlalchemy.dialects.postgresql.ExcludeConstraint(*elements, **kw)

Bases: sqlalchemy.schema.ColumnCollectionConstraint
A table-level EXCLUDE constraint.
Defines an EXCLUDE constraint as described in the postgres documentation.
__init__(*elements, **kw)

Create an ExcludeConstraint object. E.g.:

const = ExcludeConstraint(
    (Column('period'), '&&'),
    (Column('group'), '='),
    where=(Column('group') != 'some group')
)

The constraint is normally embedded into the Table construct directly, or added later using append_constraint():

some_table = Table(
    'some_table', metadata,
    Column('id', Integer, primary_key=True),
    Column('period', TSRANGE()),
    Column('group', String)
)

some_table.append_constraint(
    ExcludeConstraint(
        (some_table.c.period, '&&'),
        (some_table.c.group, '='),
        where=some_table.c.group != 'some group',
        name='some_table_excl_const'
    )
)

Parameters

*elements – A sequence of two tuples of the form (column, operator) where "column" is a SQL expression element or a raw SQL string, most typically a Column object, and "operator" is a string containing the operator to use. In order to specify a column name when a Column object is not available, while ensuring that any necessary quoting rules take effect, an ad-hoc Column or sql.expression.column() object should be used.

name – Optional, the in-database name of this constraint.

deferrable – Optional bool. If set, emit DEFERRABLE or NOT DEFERRABLE when issuing DDL for this constraint.

initially – Optional string. If set, emit INITIALLY <value> when issuing DDL for this constraint.

using – Optional string. If set, emit USING <index_method> when issuing DDL for this constraint. Defaults to 'gist'.

where – Optional SQL expression construct or literal SQL string. If set, emit WHERE <predicate> when issuing DDL for this constraint.

Warning

The ExcludeConstraint.where argument to ExcludeConstraint can be passed as a Python string argument, which will be treated as trusted SQL text and rendered as given. DO NOT PASS UNTRUSTED INPUT TO THIS PARAMETER.
For example:

from sqlalchemy.dialects.postgresql import ExcludeConstraint, TSRANGE

class RoomBooking(Base):

    __tablename__ = 'room_booking'

    room = Column(Integer(), primary_key=True)
    during = Column(TSRANGE())

    __table_args__ = (
        ExcludeConstraint(('room', '='), ('during', '&&')),
    )

PostgreSQL DML Constructs

sqlalchemy.dialects.postgresql.dml.insert(table, values=None, inline=False, bind=None, prefixes=None, returning=None, return_defaults=False, **dialect_kw)

Construct a new Insert object.
This constructor is mirrored as a public API function; see insert() for a full usage and argument description.
class sqlalchemy.dialects.postgresql.dml.Insert(table, values=None, inline=False, bind=None, prefixes=None, returning=None, return_defaults=False, **dialect_kw)

Bases: sqlalchemy.sql.expression.Insert
PostgreSQL-specific implementation of INSERT.
Adds methods for PG-specific syntaxes such as ON CONFLICT.

excluded

Provide the excluded namespace for an ON CONFLICT statement.
PG's ON CONFLICT clause allows reference to the row that would be inserted, known as excluded. This attribute provides all columns in this row to be referenceable.

on_conflict_do_nothing(constraint=None, index_elements=None, index_where=None)

Specifies a DO NOTHING action for ON CONFLICT clause.
The constraint and index_elements arguments are optional, but only one of these can be specified.

Parameters

constraint – The name of a unique or exclusion constraint on the table, or the constraint object itself if it has a .name attribute.

index_elements – A sequence consisting of string column names, Column objects, or other column expression objects that will be used to infer a target index.

index_where – Additional WHERE criterion that can be used to infer a conditional target index.

on_conflict_do_update(constraint=None, index_elements=None, index_where=None, set_=None, where=None)

Specifies a DO UPDATE SET action for ON CONFLICT clause.
Either the constraint or index_elements argument is required, but only one of these can be specified.

Parameters

constraint – The name of a unique or exclusion constraint on the table, or the constraint object itself if it has a .name attribute.

index_elements – A sequence consisting of string column names, Column objects, or other column expression objects that will be used to infer a target index.

index_where – Additional WHERE criterion that can be used to infer a conditional target index.

set_ – Required argument. A dictionary or other mapping object with column names as keys and expressions or literals as values, specifying the SET actions to take.
If the target Column specifies a ".key" attribute distinct from the column name, that key should be used.

Warning

This dictionary does not take into account Python-specified default UPDATE values or generation functions, e.g. those specified using Column.onupdate. These values will not be exercised for an ON CONFLICT style of UPDATE, unless they are manually specified in the Insert.on_conflict_do_update.set_ dictionary.

where – Optional argument. If present, can be a literal SQL string or an acceptable expression for a WHERE clause that restricts the rows affected by DO UPDATE SET. Rows not meeting the WHERE condition will not be updated (effectively a DO NOTHING for those rows).
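As a sketch of on_conflict_do_update() together with the excluded namespace (the users table is an illustrative assumption of this snippet), the resulting upsert statement can be compiled without a database connection:

```python
from sqlalchemy import Column, Integer, MetaData, String, Table
from sqlalchemy.dialects import postgresql
from sqlalchemy.dialects.postgresql import insert

metadata = MetaData()

# Illustrative table for the upsert.
users = Table(
    "users",
    metadata,
    Column("id", Integer, primary_key=True),
    Column("name", String),
)

stmt = insert(users).values(id=1, name="alice")
# On a conflicting id, overwrite name with the value that would have
# been inserted, referenced via the `excluded` namespace.
stmt = stmt.on_conflict_do_update(
    index_elements=[users.c.id],
    set_=dict(name=stmt.excluded.name),
)
sql = str(stmt.compile(dialect=postgresql.dialect()))
print(sql)  # INSERT ... ON CONFLICT (id) DO UPDATE SET name = excluded.name
```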
Rows\nnot meeting the WHERE    condition will not be updated\n(effectively a DO NOTHING    for those rows).</p>"},{"id":"text-512","type":"text","heading":"","plain_text":"psycopg2\nSupport for the PostgreSQL database via the psycopg2 driver.","html":"<p>psycopg2\nSupport for the PostgreSQL database via the psycopg2 driver.</p>"},{"id":"text-513","type":"text","heading":"","plain_text":"DBAPI\nDocumentation and download information (if applicable) for psycopg2 is available at:\nhttp://pypi.python.org/pypi/psycopg2/","html":"<p>DBAPI\nDocumentation and download information (if applicable) for psycopg2 is available at:\nhttp://pypi.python.org/pypi/psycopg2/</p>"},{"id":"text-514","type":"text","heading":"","plain_text":"Connecting\nConnect String:","html":"<p>Connecting\nConnect String:</p>"},{"id":"text-515","type":"text","heading":"","plain_text":"postgresql+psycopg2://user:password@host:port/dbname[?key=value&key=value...]","html":"<p>postgresql+psycopg2://user:password@host:port/dbname[?key=value&amp;key=value...]</p>"},{"id":"text-516","type":"text","heading":"","plain_text":"psycopg2 Connect Arguments\npsycopg2-specific keyword arguments which are accepted by\ncreate_engine()    sont:","html":"<p>psycopg2 Connect Arguments\npsycopg2-specific keyword arguments which are accepted by\ncreate_engine()    sont:</p>"},{"id":"text-517","type":"text","heading":"","plain_text":"server_side_cursors: Enable the usage of “server side cursors” for SQL\nstatements which support this feature. What this essentially means from a\npsycopg2 point of view is that the cursor is created using a name, e.g.\nconnection.cursor(&#39;some name&#39;), which has the effect that result rows\nare not immediately pre-fetched and buffered after statement execution, but\nare instead left on the server and only retrieved as needed. 
SQLAlchemy’s\nResultProxy    uses special row-buffering\nbehavior when this feature is enabled, such that groups of 100 rows at a\ntime are fetched over the wire to reduce conversational overhead.\nNote that the Connection.execution_options.stream_results\nexecution option is a more targeted\nway of enabling this mode on a per-execution basis.","html":"<p>server_side_cursors: Enable the usage of “server side cursors” for SQL\nstatements which support this feature. What this essentially means from a\npsycopg2 point of view is that the cursor is created using a name, e.g.\nconnection.cursor(&#039;some name&#039;), which has the effect that result rows\nare not immediately pre-fetched and buffered after statement execution, but\nare instead left on the server and only retrieved as needed. SQLAlchemy’s\nResultProxy    uses special row-buffering\nbehavior when this feature is enabled, such that groups of 100 rows at a\ntime are fetched over the wire to reduce conversational overhead.\nNote that the Connection.execution_options.stream_results\nexecution option is a more targeted\nway of enabling this mode on a per-execution basis.</p>"},{"id":"text-518","type":"text","heading":"","plain_text":"use_native_unicode: Enable the usage of Psycopg2 “native unicode” mode\nper connection.  True by default.","html":"<p>use_native_unicode: Enable the usage of Psycopg2 “native unicode” mode\nper connection.  
True by default.</p>"},{"id":"text-519","type":"text","heading":"","plain_text":"isolation_level: This option, available for all PostgreSQL dialects,\nincludes the AUTOCOMMIT    isolation level when using the psycopg2\ndialect.","html":"<p>isolation_level: This option, available for all PostgreSQL dialects,\nincludes the AUTOCOMMIT    isolation level when using the psycopg2\ndialect.</p>"},{"id":"text-520","type":"text","heading":"","plain_text":"client_encoding: sets the client encoding in a libpq-agnostic way,\nusing psycopg2’s set_client_encoding()    method.","html":"<p>client_encoding: sets the client encoding in a libpq-agnostic way,\nusing psycopg2’s set_client_encoding()    method.</p>"},{"id":"text-521","type":"text","heading":"","plain_text":"executemany_mode, executemany_batch_page_size,\nexecutemany_values_page_size: Allows use of psycopg2\nextensions for optimizing “executemany”-style queries.  See the referenced\nsection below for details.","html":"<p>executemany_mode, executemany_batch_page_size,\nexecutemany_values_page_size: Allows use of psycopg2\nextensions for optimizing “executemany”-style queries.  See the referenced\nsection below for details.</p>"},{"id":"text-522","type":"text","heading":"","plain_text":"use_batch_mode: this is the previous setting used to affect “executemany”\nmode and is now deprecated.","html":"<p>use_batch_mode: this is the previous setting used to affect “executemany”\nmode and is now deprecated.</p>"},{"id":"text-523","type":"text","heading":"","plain_text":"Unix Domain Connections\npsycopg2 supports connecting via Unix domain connections.   
When the host\nportion of the URL is omitted, SQLAlchemy passes None    to psycopg2,\nwhich specifies Unix-domain communication rather than TCP/IP communication:","html":"<p>Unix Domain Connections\npsycopg2 supports connecting via Unix domain connections.   When the host\nportion of the URL is omitted, SQLAlchemy passes None    to psycopg2,\nwhich specifies Unix-domain communication rather than TCP/IP communication:</p>"},{"id":"text-524","type":"text","heading":"","plain_text":"create_engine(&quot;postgresql+psycopg2://user:password@/dbname&quot;)","html":"<p>create_engine(&quot;postgresql+psycopg2://user:password@/dbname&quot;)</p>"},{"id":"text-525","type":"text","heading":"","plain_text":"By default, the socket file used is to connect to a Unix-domain socket\nin /tmp, or whatever socket directory was specified when PostgreSQL\nwas built.  This value can be overridden by passing a pathname to psycopg2,\nusing host    as an additional keyword argument:","html":"<p>By default, the socket file used is to connect to a Unix-domain socket\nin /tmp, or whatever socket directory was specified when PostgreSQL\nwas built.  This value can be overridden by passing a pathname to psycopg2,\nusing host    as an additional keyword argument:</p>"},{"id":"text-526","type":"text","heading":"","plain_text":"create_engine(&quot;postgresql+psycopg2://user:password@/dbname?host=/var/lib/postgresql&quot;)","html":"<p>create_engine(&quot;postgresql+psycopg2://user:password@/dbname?host=/var/lib/postgresql&quot;)</p>"},{"id":"text-527","type":"text","heading":"","plain_text":"Empty DSN Connections / Environment Variable Connections\nThe psycopg2 DBAPI can connect to PostgreSQL by passing an empty DSN to the\nlibpq client library, which by default indicates to connect to a localhost\nPostgreSQL database that is open for “trust” connections.  
This behavior can be\nfurther tailored using a particular set of environment variables which are\nprefixed with PG_..., which are consumed by libpq    to take the place of\nany or all elements of the connection string.\nFor this form, the URL can be passed without any elements other than the\ninitial scheme:","html":"<p>Empty DSN Connections / Environment Variable Connections\nThe psycopg2 DBAPI can connect to PostgreSQL by passing an empty DSN to the\nlibpq client library, which by default indicates to connect to a localhost\nPostgreSQL database that is open for “trust” connections.  This behavior can be\nfurther tailored using a particular set of environment variables which are\nprefixed with PG_..., which are consumed by libpq    to take the place of\nany or all elements of the connection string.\nFor this form, the URL can be passed without any elements other than the\ninitial scheme:</p>"},{"id":"text-528","type":"text","heading":"","plain_text":"engine = create_engine(&#39;postgresql+psycopg2://&#39;)","html":"<p>engine = create_engine(&#039;postgresql+psycopg2://&#039;)</p>"},{"id":"text-529","type":"text","heading":"","plain_text":"In the above form, a blank “dsn” string is passed to the psycopg2.connect()\nfunction which in turn represents an empty DSN passed to libpq.","html":"<p>In the above form, a blank “dsn” string is passed to the psycopg2.connect()\nfunction which in turn represents an empty DSN passed to libpq.</p>"},{"id":"text-530","type":"text","heading":"","plain_text":"New in version 1.3.2: support for parameter-less connections with psycopg2.","html":"<p>New in version 1.3.2: support for parameter-less connections with psycopg2.</p>"},{"id":"text-531","type":"text","heading":"","plain_text":"See also\nEnvironment Variables &#8211;\nPostgreSQL documentation on how to use PG_...\nenvironment variables for connections.","html":"<p>See also\nEnvironment Variables &#8211;\nPostgreSQL documentation on how to use PG_...\nenvironment 
variables for connections.</p>"},{"id":"text-532","type":"text","heading":"","plain_text":"Per-Statement/Connection Execution Options\nThe following DBAPI-specific options are respected when used with\nConnection.execution_options(), Executable.execution_options(),\nQuery.execution_options(), in addition to those not specific to DBAPIs:","html":"<p>Per-Statement/Connection Execution Options\nThe following DBAPI-specific options are respected when used with\nConnection.execution_options(), Executable.execution_options(),\nQuery.execution_options(), in addition to those not specific to DBAPIs:</p>"},{"id":"text-533","type":"text","heading":"","plain_text":"isolation_level    &#8211; Set the transaction isolation level for the lifespan\nof a Connection    (can only be set on a connection, not a statement\nor query).   See Psycopg2 Transaction Isolation Level.","html":"<p>isolation_level    &#8211; Set the transaction isolation level for the lifespan\nof a Connection    (can only be set on a connection, not a statement\nor query).   
See Psycopg2 Transaction Isolation Level.</p>"},{"id":"text-534","type":"text","heading":"","plain_text":"stream_results    &#8211; Enable or disable usage of psycopg2 server side\ncursors &#8211; this feature makes use of “named” cursors in combination with\nspecial result handling methods so that result rows are not fully buffered.\nIf None    or not set, the server_side_cursors    option of the\nEngine    is used.","html":"<p>stream_results    &#8211; Enable or disable usage of psycopg2 server side\ncursors &#8211; this feature makes use of “named” cursors in combination with\nspecial result handling methods so that result rows are not fully buffered.\nIf None    or not set, the server_side_cursors    option of the\nEngine    is used.</p>"},{"id":"text-535","type":"text","heading":"","plain_text":"max_row_buffer    &#8211; when using stream_results, an integer value that\nspecifies the maximum number of rows to buffer at a time.  This is\ninterpreted by the BufferedRowResultProxy, and if omitted the\nbuffer will grow to ultimately store 1000 rows at a time.","html":"<p>max_row_buffer    &#8211; when using stream_results, an integer value that\nspecifies the maximum number of rows to buffer at a time.  
This is\ninterpreted by the BufferedRowResultProxy, and if omitted the\nbuffer will grow to ultimately store 1000 rows at a time.</p>"},{"id":"text-536","type":"text","heading":"","plain_text":"Psycopg2 Fast Execution Helpers\nModern versions of psycopg2 include a feature known as\nFast Execution Helpers, which\nhave been shown in benchmarking to improve psycopg2’s executemany()\nperformance, primarily with INSERT statements, by multiple orders of magnitude.\nSQLAlchemy allows this extension to be used for all executemany()    style\ncalls invoked by an Engine    when used with multiple parameter\nsets, which includes the use of this feature both by the\nCore as well as by the ORM for inserts of objects with non-autogenerated\nprimary key values, by adding the executemany_mode    flag to\ncreate_engine():","html":"<p>Psycopg2 Fast Execution Helpers\nModern versions of psycopg2 include a feature known as\nFast Execution Helpers, which\nhave been shown in benchmarking to improve psycopg2’s executemany()\nperformance, primarily with INSERT statements, by multiple orders of magnitude.\nSQLAlchemy allows this extension to be used for all executemany()    style\ncalls invoked by an Engine    when used with multiple parameter\nsets, which includes the use of this feature both by the\nCore as well as by the ORM for inserts of objects with non-autogenerated\nprimary key values, by adding the executemany_mode    flag to\ncreate_engine():</p>"},{"id":"text-537","type":"text","heading":"","plain_text":"engine = create_engine(\n    &quot;postgresql+psycopg2://scott:tiger@host/dbname&quot;,\n    executemany_mode=&#39;batch&#39;)","html":"<p>engine = create_engine(\n    &quot;postgresql+psycopg2://scott:tiger@host/dbname&quot;,\n    executemany_mode=&#039;batch&#039;)</p>"},{"id":"text-538","type":"text","heading":"","plain_text":"Changed in version 1.3.7: &#8211; the use_batch_mode    flag has been superseded\nby a new parameter executemany_mode    which provides 
support both for\npsycopg2’s execute_batch    helper as well as the execute_values\nhelper.","html":"<p>Changed in version 1.3.7: &#8211; the use_batch_mode    flag has been superseded\nby a new parameter executemany_mode    which provides support both for\npsycopg2’s execute_batch    helper as well as the execute_values\nhelper.</p>"},{"id":"text-539","type":"text","heading":"","plain_text":"Possible options for executemany_mode    include:","html":"<p>Possible options for executemany_mode    include:</p>"},{"id":"text-540","type":"text","heading":"","plain_text":"None    &#8211; By default, psycopg2’s extensions are not used, and the usual\ncursor.executemany()    method is used when invoking batches of statements.","html":"<p>None    &#8211; By default, psycopg2’s extensions are not used, and the usual\ncursor.executemany()    method is used when invoking batches of statements.</p>"},{"id":"text-541","type":"text","heading":"","plain_text":"&#39;batch&#39;    &#8211; Uses psycopg2.extras.execute_batch    so that multiple copies\nof a SQL query, each one corresponding to a parameter set passed to\nexecutemany(), are joined into a single SQL string separated by a\nsemicolon.   This is the same behavior as was provided by the\nuse_batch_mode=True    flag.","html":"<p>&#039;batch&#039;    &#8211; Uses psycopg2.extras.execute_batch    so that multiple copies\nof a SQL query, each one corresponding to a parameter set passed to\nexecutemany(), are joined into a single SQL string separated by a\nsemicolon.   This is the same behavior as was provided by the\nuse_batch_mode=True    flag.</p>"},{"id":"text-542","type":"text","heading":"","plain_text":"&#39;values&#39;    &#8211; For Core insert()    constructs only (including those\nemitted by the ORM automatically), the psycopg2.extras.execute_values\nextension is used so that multiple parameter sets are grouped into a single\nINSERT statement and joined together with multiple VALUES expressions. 
This\nmethod requires that the string text of the VALUES clause inside the\nINSERT statement is manipulated, so is only supported with a compiled\ninsert()    construct where the format is predictable.  For all other\nconstructs, including plain textual INSERT statements not rendered by the\nSQLAlchemy expression language compiler, the\npsycopg2.extras.execute_batch    method is used.   It is therefore important\nto note that “values” mode implies that “batch” mode is also used for\nall statements for which “values” mode does not apply.","html":"<p>&#039;values&#039;    &#8211; For Core insert()    constructs only (including those\nemitted by the ORM automatically), the psycopg2.extras.execute_values\nextension is used so that multiple parameter sets are grouped into a single\nINSERT statement and joined together with multiple VALUES expressions. This\nmethod requires that the string text of the VALUES clause inside the\nINSERT statement is manipulated, so is only supported with a compiled\ninsert()    construct where the format is predictable.  For all other\nconstructs, including plain textual INSERT statements not rendered by the\nSQLAlchemy expression language compiler, the\npsycopg2.extras.execute_batch    method is used.   It is therefore important\nto note that “values” mode implies that “batch” mode is also used for\nall statements for which “values” mode does not apply.</p>"},{"id":"text-543","type":"text","heading":"","plain_text":"For both strategies, the executemany_batch_page_size    and\nexecutemany_values_page_size    arguments control how many parameter sets\nshould be represented in each execution.  Because “values” mode implies a\nfallback down to “batch” mode for non-INSERT statements, there are two\nindependent page size arguments.  For each, the default value of None    means\nto use psycopg2’s defaults, which at the time of this writing are quite low at\n100.   
For the execute_values    method, a number as high as 10000 may prove\nto be performant, whereas for execute_batch, as the number represents\nfull statements repeated, a number closer to the default of 100 is likely\nmore appropriate:","html":"<p>For both strategies, the executemany_batch_page_size    and\nexecutemany_values_page_size    arguments control how many parameter sets\nshould be represented in each execution.  Because “values” mode implies a\nfallback down to “batch” mode for non-INSERT statements, there are two\nindependent page size arguments.  For each, the default value of None    means\nto use psycopg2’s defaults, which at the time of this writing are quite low at\n100.   For the execute_values    method, a number as high as 10000 may prove\nto be performant, whereas for execute_batch, as the number represents\nfull statements repeated, a number closer to the default of 100 is likely\nmore appropriate:</p>"},{"id":"text-544","type":"text","heading":"","plain_text":"engine = create_engine(\n    &quot;postgresql+psycopg2://scott:tiger@host/dbname&quot;,\n    executemany_mode=&#39;values&#39;,\n    executemany_values_page_size=10000, executemany_batch_page_size=500)","html":"<p>engine = create_engine(\n    &quot;postgresql+psycopg2://scott:tiger@host/dbname&quot;,\n    executemany_mode=&#039;values&#039;,\n    executemany_values_page_size=10000, executemany_batch_page_size=500)</p>"},{"id":"text-545","type":"text","heading":"","plain_text":"Changed in version 1.3.7: &#8211; Added support for\npsycopg2.extras.execute_values. The use_batch_mode    flag is\nsuperseded by the executemany_mode    flag.","html":"<p>Changed in version 1.3.7: &#8211; Added support for\npsycopg2.extras.execute_values. 
The use_batch_mode    flag is\nsuperseded by the executemany_mode    flag.</p>"},{"id":"text-546","type":"text","heading":"","plain_text":"Unicode with Psycopg2\nBy default, the psycopg2 driver uses the psycopg2.extensions.UNICODE\nextension, such that the DBAPI receives and returns all strings as Python\nUnicode objects directly &#8211; SQLAlchemy passes these values through without\nchange.   Psycopg2 here will encode/decode string values based on the\ncurrent “client encoding” setting; by default this is the value in\nthe postgresql.conf    file, which often defaults to SQL_ASCII.\nTypically, this can be changed to utf8, as a more useful default:","html":"<p>Unicode with Psycopg2\nBy default, the psycopg2 driver uses the psycopg2.extensions.UNICODE\nextension, such that the DBAPI receives and returns all strings as Python\nUnicode objects directly &#8211; SQLAlchemy passes these values through without\nchange.   Psycopg2 here will encode/decode string values based on the\ncurrent “client encoding” setting; by default this is the value in\nthe postgresql.conf    file, which often defaults to SQL_ASCII.\nTypically, this can be changed to utf8, as a more useful default:</p>"},{"id":"text-547","type":"text","heading":"","plain_text":"# postgresql.conf file","html":"<p># postgresql.conf file</p>"},{"id":"text-548","type":"text","heading":"","plain_text":"# client_encoding = sql_ascii # actually, defaults to database\n                             # encoding\nclient_encoding = utf8","html":"<p># client_encoding = sql_ascii # actually, defaults to database\n                             # encoding\nclient_encoding = utf8</p>"},{"id":"text-549","type":"text","heading":"","plain_text":"A second way to affect the client encoding is to set it within Psycopg2\nlocally.   
SQLAlchemy will call psycopg2’s\nconnection.set_client_encoding()    method\non all new connections based on the value passed to\ncreate_engine()    using the client_encoding    parameter:","html":"<p>A second way to affect the client encoding is to set it within Psycopg2\nlocally.   SQLAlchemy will call psycopg2’s\nconnection.set_client_encoding()    method\non all new connections based on the value passed to\ncreate_engine()    using the client_encoding    parameter:</p>"},{"id":"text-550","type":"text","heading":"","plain_text":"# set_client_encoding() setting;\n# works for *all* PostgreSQL versions\nengine = create_engine(&quot;postgresql://user:pass@host/dbname&quot;,\n                       client_encoding=&#39;utf8&#39;)","html":"<p># set_client_encoding() setting;\n# works for *all* PostgreSQL versions\nengine = create_engine(&quot;postgresql://user:pass@host/dbname&quot;,\n                       client_encoding=&#039;utf8&#039;)</p>"},{"id":"text-551","type":"text","heading":"","plain_text":"This overrides the encoding specified in the PostgreSQL client configuration.\nWhen using the parameter in this way, the psycopg2 driver emits\nSET client_encoding TO &#39;utf8&#39;    on the connection explicitly, and works\nin all PostgreSQL versions.\nNote that the client_encoding    setting as passed to create_engine()\nis not the same as the more recently added client_encoding    parameter\nnow supported by libpq directly.   
This is enabled when client_encoding\nis passed directly to psycopg2.connect(), and from SQLAlchemy is passed\nusing the create_engine.connect_args    parameter:","html":"<p>This overrides the encoding specified in the PostgreSQL client configuration.\nWhen using the parameter in this way, the psycopg2 driver emits\nSET client_encoding TO &#039;utf8&#039;    on the connection explicitly, and works\nin all PostgreSQL versions.\nNote that the client_encoding    setting as passed to create_engine()\nis not the same as the more recently added client_encoding    parameter\nnow supported by libpq directly.   This is enabled when client_encoding\nis passed directly to psycopg2.connect(), and from SQLAlchemy is passed\nusing the create_engine.connect_args    parameter:</p>"},{"id":"text-552","type":"text","heading":"","plain_text":"engine = create_engine(\n    &quot;postgresql://user:pass@host/dbname&quot;,\n    connect_args={&#39;client_encoding&#39;: &#39;utf8&#39;})","html":"<p>engine = create_engine(\n    &quot;postgresql://user:pass@host/dbname&quot;,\n    connect_args={&#039;client_encoding&#039;: &#039;utf8&#039;})</p>"},{"id":"text-553","type":"text","heading":"","plain_text":"# using the query string is equivalent\nengine = create_engine(&quot;postgresql://user:pass@host/dbname?client_encoding=utf8&quot;)","html":"<p># using the query string is equivalent\nengine = create_engine(&quot;postgresql://user:pass@host/dbname?client_encoding=utf8&quot;)</p>"},{"id":"text-554","type":"text","heading":"","plain_text":"The above parameter was only added to libpq as of version 9.1 of PostgreSQL,\nso using the previous method is better for cross-version support.","html":"<p>The above parameter was only added to libpq as of version 9.1 of PostgreSQL,\nso using the previous method is better for cross-version support.</p>"},{"id":"text-555","type":"text","heading":"","plain_text":"Disabling Native Unicode\nSQLAlchemy can also be instructed to skip the usage of the psycopg2\nUNICODE 
   extension and to instead utilize its own unicode encode/decode\nservices, which are normally reserved only for those DBAPIs that don’t\nfully support unicode directly.  Passing use_native_unicode=False    to\ncreate_engine()    will disable usage of psycopg2.extensions.UNICODE.\nSQLAlchemy will instead encode data itself into Python bytestrings on the way\nin and coerce from bytes on the way back,\nusing the value of the create_engine() encoding    parameter, which\ndefaults to utf-8.\nSQLAlchemy’s own unicode encode/decode functionality is steadily becoming\nobsolete as most DBAPIs now support unicode fully.","html":"<p>Disabling Native Unicode\nSQLAlchemy can also be instructed to skip the usage of the psycopg2\nUNICODE    extension and to instead utilize its own unicode encode/decode\nservices, which are normally reserved only for those DBAPIs that don’t\nfully support unicode directly.  Passing use_native_unicode=False    to\ncreate_engine()    will disable usage of psycopg2.extensions.UNICODE.\nSQLAlchemy will instead encode data itself into Python bytestrings on the way\nin and coerce from bytes on the way back,\nusing the value of the create_engine() encoding    parameter, which\ndefaults to utf-8.\nSQLAlchemy’s own unicode encode/decode functionality is steadily becoming\nobsolete as most DBAPIs now support unicode fully.</p>"},{"id":"text-556","type":"text","heading":"","plain_text":"Bound Parameter Styles\nThe default parameter style for the psycopg2 dialect is “pyformat”, where\nSQL is rendered using %(paramname)s    style.   This format has the limitation\nthat it does not accommodate the unusual case of parameter names that\nactually contain percent or parenthesis symbols; as SQLAlchemy in many cases\ngenerates bound parameter names based on the name of a column, the presence\nof these characters in a column name can lead to problems.\nThere are two solutions to the issue of a schema.Column    that contains\none of these characters in its name.  
One is to specify the\nschema.Column.key    for columns that have such names:","html":"<p>Bound Parameter Styles\nThe default parameter style for the psycopg2 dialect is “pyformat”, where\nSQL is rendered using %(paramname)s    style.   This format has the limitation\nthat it does not accommodate the unusual case of parameter names that\nactually contain percent or parenthesis symbols; as SQLAlchemy in many cases\ngenerates bound parameter names based on the name of a column, the presence\nof these characters in a column name can lead to problems.\nThere are two solutions to the issue of a schema.Column    that contains\none of these characters in its name.  One is to specify the\nschema.Column.key    for columns that have such names:</p>"},{"id":"text-557","type":"text","heading":"","plain_text":"measurement = Table(&#39;measurement&#39;, metadata,\n    Column(&#39;Size (meters)&#39;, Integer, key=&#39;size_meters&#39;)\n)","html":"<p>measurement = Table(&#039;measurement&#039;, metadata,\n    Column(&#039;Size (meters)&#039;, Integer, key=&#039;size_meters&#039;)\n)</p>"},{"id":"text-558","type":"text","heading":"","plain_text":"Above, an INSERT statement such as measurement.insert()    will use\nsize_meters    as the parameter name, and a SQL expression such as\nmeasurement.c.size_meters &gt; 10    will derive the bound parameter name\nfrom the size_meters    key as well.","html":"<p>Above, an INSERT statement such as measurement.insert()    will use\nsize_meters    as the parameter name, and a SQL expression such as\nmeasurement.c.size_meters &gt; 10    will derive the bound parameter name\nfrom the size_meters    key as well.</p>"},{"id":"text-559","type":"text","heading":"","plain_text":"Changed in version 1.0.0: &#8211; SQL expressions will use Column.key\nas the source of naming when anonymous bound parameters are created\nin SQL expressions; previously, this behavior only applied to\nTable.insert()    and Table.update()    parameter 
names.","html":"<p>Changed in version 1.0.0: &#8211; SQL expressions will use Column.key\nas the source of naming when anonymous bound parameters are created\nin SQL expressions; previously, this behavior only applied to\nTable.insert()    and Table.update()    parameter names.</p>"},{"id":"text-560","type":"text","heading":"","plain_text":"The other solution is to use a positional format; psycopg2 allows use of the\n“format” paramstyle, which can be passed to\ncreate_engine.paramstyle:","html":"<p>The other solution is to use a positional format; psycopg2 allows use of the\n“format” paramstyle, which can be passed to\ncreate_engine.paramstyle:</p>"},{"id":"text-561","type":"text","heading":"","plain_text":"engine = create_engine(\n    &#39;postgresql://scott:tiger@localhost:5432/test&#39;, paramstyle=&#39;format&#39;)","html":"<p>engine = create_engine(\n    &#039;postgresql://scott:tiger@localhost:5432/test&#039;, paramstyle=&#039;format&#039;)</p>"},{"id":"text-562","type":"text","heading":"","plain_text":"With the above engine, instead of a statement like:","html":"<p>With the above engine, instead of a statement like:</p>"},{"id":"text-563","type":"text","heading":"","plain_text":"INSERT INTO measurement (&quot;Size (meters)&quot;) VALUES (%(Size (meters))s)\n{&#39;Size (meters)&#39;: 1}","html":"<p>INSERT INTO measurement (&quot;Size (meters)&quot;) VALUES (%(Size (meters))s)\n{&#039;Size (meters)&#039;: 1}</p>"},{"id":"text-564","type":"text","heading":"","plain_text":"we instead see:","html":"<p>we instead see:</p>"},{"id":"text-565","type":"text","heading":"","plain_text":"INSERT INTO measurement (&quot;Size (meters)&quot;) VALUES (%s)\n(1, )","html":"<p>INSERT INTO measurement (&quot;Size (meters)&quot;) VALUES (%s)\n(1, )</p>"},{"id":"text-566","type":"text","heading":"","plain_text":"Where above, the dictionary style is converted into a tuple with positional\nstyle.","html":"<p>Where above, the dictionary style is converted into a tuple with 
positional\nstyle.</p>"},{"id":"text-567","type":"text","heading":"","plain_text":"Transactions\nThe psycopg2 dialect fully supports SAVEPOINT and two-phase commit operations.","html":"<p>Transactions\nThe psycopg2 dialect fully supports SAVEPOINT and two-phase commit operations.</p>"},{"id":"text-568","type":"text","heading":"","plain_text":"Psycopg2 Transaction Isolation Level\nAs discussed in Transaction Isolation Level,\nall PostgreSQL dialects support setting of transaction isolation level\nboth via the isolation_level    parameter passed to create_engine(),\nas well as the isolation_level    argument used by\nConnection.execution_options(). When using the psycopg2 dialect, these\noptions make use of psycopg2’s set_isolation_level()    connection method,\nrather than emitting a PostgreSQL directive; this is because psycopg2’s\nAPI-level setting is always emitted at the start of each transaction in any\ncase.\nThe psycopg2 dialect supports these constants for isolation level:","html":"<p>Psycopg2 Transaction Isolation Level\nAs discussed in Transaction Isolation Level,\nall PostgreSQL dialects support setting of transaction isolation level\nboth via the isolation_level    parameter passed to create_engine(),\nas well as the isolation_level    argument used by\nConnection.execution_options(). 
When using the psycopg2 dialect, these\noptions make use of psycopg2’s set_isolation_level()    connection method,\nrather than emitting a PostgreSQL directive; this is because psycopg2’s\nAPI-level setting is always emitted at the start of each transaction in any\ncase.\nThe psycopg2 dialect supports these constants for isolation level:</p>"},{"id":"text-569","type":"text","heading":"","plain_text":"READ COMMITTED","html":"<p>READ COMMITTED</p>"},{"id":"text-570","type":"text","heading":"","plain_text":"READ UNCOMMITTED","html":"<p>READ UNCOMMITTED</p>"},{"id":"text-571","type":"text","heading":"","plain_text":"REPEATABLE READ","html":"<p>REPEATABLE READ</p>"},{"id":"text-572","type":"text","heading":"","plain_text":"SERIALIZABLE","html":"<p>SERIALIZABLE</p>"},{"id":"text-573","type":"text","heading":"","plain_text":"AUTOCOMMIT","html":"<p>AUTOCOMMIT</p>"},{"id":"text-574","type":"text","heading":"","plain_text":"NOTICE logging\nThe psycopg2 dialect will log PostgreSQL NOTICE messages\nvia the sqlalchemy.dialects.postgresql    logger.  When this logger\nis set to the logging.INFO    level, notice messages will be logged:","html":"<p>NOTICE logging\nThe psycopg2 dialect will log PostgreSQL NOTICE messages\nvia the sqlalchemy.dialects.postgresql    logger.  When this logger\nis set to the logging.INFO    level, notice messages will be logged:</p>"},{"id":"text-575","type":"text","heading":"","plain_text":"import logging","html":"<p>import logging</p>"},{"id":"text-576","type":"text","heading":"","plain_text":"logging.getLogger(&#39;sqlalchemy.dialects.postgresql&#39;).setLevel(logging.INFO)","html":"<p>logging.getLogger(&#039;sqlalchemy.dialects.postgresql&#039;).setLevel(logging.INFO)</p>"},{"id":"text-577","type":"text","heading":"","plain_text":"Above, it is assumed that logging is configured externally.  
If this is not\nthe case, configuration such as logging.basicConfig()    must be utilized:","html":"<p>Above, it is assumed that logging is configured externally.  If this is not\nthe case, configuration such as logging.basicConfig()    must be utilized:</p>"},{"id":"text-578","type":"text","heading":"","plain_text":"import logging","html":"<p>import logging</p>"},{"id":"text-579","type":"text","heading":"","plain_text":"logging.basicConfig()   # log messages to stdout\nlogging.getLogger(&#39;sqlalchemy.dialects.postgresql&#39;).setLevel(logging.INFO)","html":"<p>logging.basicConfig()   # log messages to stdout\nlogging.getLogger(&#039;sqlalchemy.dialects.postgresql&#039;).setLevel(logging.INFO)</p>"},{"id":"text-580","type":"text","heading":"","plain_text":"HSTORE type\nThe psycopg2    DBAPI includes an extension to natively handle marshalling of\nthe HSTORE type.   The SQLAlchemy psycopg2 dialect will enable this extension\nby default when psycopg2 version 2.4 or greater is used, and\nit is detected that the target database has the HSTORE type set up for use.\nIn other words, when the dialect makes the first\nconnection, a sequence like the following is performed:","html":"<p>HSTORE type\nThe psycopg2    DBAPI includes an extension to natively handle marshalling of\nthe HSTORE type.   
The SQLAlchemy psycopg2 dialect will enable this extension\nby default when psycopg2 version 2.4 or greater is used, and\nit is detected that the target database has the HSTORE type set up for use.\nIn other words, when the dialect makes the first\nconnection, a sequence like the following is performed:</p>"},{"id":"text-581","type":"text","heading":"","plain_text":"Request the available HSTORE oids using\npsycopg2.extras.HstoreAdapter.get_oids().\nIf this function returns a list of HSTORE identifiers, we then determine\nthat the HSTORE    extension is present.\nThis function is skipped if the version of psycopg2 installed is\nless than version 2.4.","html":"<p>Request the available HSTORE oids using\npsycopg2.extras.HstoreAdapter.get_oids().\nIf this function returns a list of HSTORE identifiers, we then determine\nthat the HSTORE    extension is present.\nThis function is skipped if the version of psycopg2 installed is\nless than version 2.4.</p>"},{"id":"text-582","type":"text","heading":"","plain_text":"If the use_native_hstore    flag is at its default of True, and\nwe’ve detected that HSTORE    oids are available, the\npsycopg2.extensions.register_hstore()    extension is invoked for all\nconnections.","html":"<p>If the use_native_hstore    flag is at its default of True, and\nwe’ve detected that HSTORE    oids are available, the\npsycopg2.extensions.register_hstore()    extension is invoked for all\nconnections.</p>"},{"id":"text-583","type":"text","heading":"","plain_text":"The register_hstore()    extension has the effect of all Python\ndictionaries being accepted as parameters regardless of the type of target\ncolumn in SQL. The dictionaries are converted by this extension into a\ntextual HSTORE expression.  
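As a rough sketch of the type in use (the table and column names here are invented; compiling against the PostgreSQL dialect requires no running database), a plain dictionary is passed as the bound value:

```python
from sqlalchemy import Column, Integer, MetaData, Table
from sqlalchemy.dialects import postgresql
from sqlalchemy.dialects.postgresql import HSTORE

metadata = MetaData()

# Hypothetical table with an HSTORE column
docs = Table(
    "docs",
    metadata,
    Column("id", Integer, primary_key=True),
    Column("attrs", HSTORE),
)

# A plain Python dict is accepted as the parameter value for the column
stmt = docs.insert().values(attrs={"lang": "fr"})
print(stmt.compile(dialect=postgresql.dialect()))
```
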
If this behavior is not desired, disable the\nuse of the hstore extension by setting use_native_hstore    à Faux    comme\nfollows:","html":"<p>le register_hstore()    extension has the effect of all Python\ndictionaries being accepted as parameters regardless of the type of target\ncolumn in SQL. The dictionaries are converted by this extension into a\ntextual HSTORE expression.  If this behavior is not desired, disable the\nuse of the hstore extension by setting use_native_hstore    à Faux    comme\nfollows:</p>"},{"id":"text-584","type":"text","heading":"","plain_text":"engine = create_engine(&quot;postgresql+psycopg2://scott:tiger@localhost/test&quot;,\n            use_native_hstore=Faux)","html":"<p>engine = create_engine(&quot;postgresql+psycopg2://scott:tiger@localhost/test&quot;,\n            use_native_hstore=Faux)</p>"},{"id":"text-585","type":"text","heading":"","plain_text":"le HSTORE    type is still supported when the\npsycopg2.extensions.register_hstore()    extension is not used.  It merely\nmeans that the coercion between Python dictionaries and the HSTORE\nstring format, on both the parameter side and the result side, will take\nplace within SQLAlchemy’s own marshalling logic, and not that of psycopg2\nwhich may be more performant.","html":"<p>le HSTORE    type is still supported when the\npsycopg2.extensions.register_hstore()    extension is not used.  
It merely\nmeans that the coercion between Python dictionaries and the HSTORE\nstring format, on both the parameter side and the result side, will take\nplace within SQLAlchemy’s own marshalling logic, and not that of psycopg2\nwhich may be more performant.</p>"},{"id":"text-586","type":"text","heading":"","plain_text":"pg8000\nSupport for the PostgreSQL database via the pg8000 driver.","html":"<p>pg8000\nSupport for the PostgreSQL database via the pg8000 driver.</p>"},{"id":"text-587","type":"text","heading":"","plain_text":"DBAPI\nDocumentation and download information (if applicable) for pg8000 is available at:\nhttps://pythonhosted.org/pg8000/","html":"<p>DBAPI\nDocumentation and download information (if applicable) for pg8000 is available at:\nhttps://pythonhosted.org/pg8000/</p>"},{"id":"text-588","type":"text","heading":"","plain_text":"Connecting\nConnect String:","html":"<p>Connecting\nConnect String:</p>"},{"id":"text-589","type":"text","heading":"","plain_text":"postgresql+pg8000://user:password@host:port/dbname[?key=value&key=value...]","html":"<p>postgresql+pg8000://user:password@host:port/dbname[?key=value&amp;key=value...]</p>"},{"id":"text-590","type":"text","heading":"","plain_text":"Remarque\nThe pg8000 dialect is not tested as part of SQLAlchemy’s continuous\nintegration and may have unresolved issues.  The recommended PostgreSQL\ndialect is psycopg2.","html":"<p>Remarque\nThe pg8000 dialect is not tested as part of SQLAlchemy’s continuous\nintegration and may have unresolved issues.  
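As a rough sketch of what that dictionary-to-string coercion involves, the HSTORE text format renders each pair as "key"=>"value". The helper functions below are hypothetical and purely illustrative, not SQLAlchemy or psycopg2 API; the real marshalling logic handles additional cases such as NULL values.

```python
import re

# Hypothetical helpers sketching the HSTORE text format; illustrative only.
def dict_to_hstore(d):
    def quote(s):
        # escape backslashes first, then double quotes
        return '"%s"' % s.replace("\\", "\\\\").replace('"', '\\"')
    return ", ".join("%s=>%s" % (quote(k), quote(v)) for k, v in d.items())

def hstore_to_dict(text):
    # minimal parser for the quoted form produced above
    pairs = re.findall(r'"((?:[^"\\]|\\.)*)"=>"((?:[^"\\]|\\.)*)"', text)
    unescape = lambda s: s.replace('\\"', '"').replace("\\\\", "\\")
    return {unescape(k): unescape(v) for k, v in pairs}

print(dict_to_hstore({"key1": "value1", "key2": "value2"}))
# "key1"=>"value1", "key2"=>"value2"
```

A round trip through these two helpers returns the original dictionary, which is the contract the dialect's marshalling provides on the parameter and result sides.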
pg8000
Support for the PostgreSQL database via the pg8000 driver.

DBAPI
Documentation and download information (if applicable) for pg8000 is available at: https://pythonhosted.org/pg8000/

Connecting
Connect String:

postgresql+pg8000://user:password@host:port/dbname[?key=value&key=value...]

Note
The pg8000 dialect is not tested as part of SQLAlchemy's continuous integration and may have unresolved issues. The recommended PostgreSQL dialect is psycopg2.

Unicode
pg8000 will encode / decode string values between it and the server using the PostgreSQL client_encoding parameter; by default this is the value in the postgresql.conf file, which often defaults to SQL_ASCII. Typically, this can be changed to utf-8, as a more useful default:

#client_encoding = sql_ascii # actually, defaults to database
                             # encoding
client_encoding = utf8

The client_encoding can be overridden for a session by executing the SQL:

SET CLIENT_ENCODING TO 'utf8';

SQLAlchemy will execute this SQL on all new connections based on the value passed to create_engine() using the client_encoding parameter:

engine = create_engine(
    "postgresql+pg8000://user:pass@host/dbname", client_encoding='utf8')

pg8000 Transaction Isolation Level
The pg8000 dialect offers the same isolation level settings as the psycopg2 dialect:

READ COMMITTED
READ UNCOMMITTED
REPEATABLE READ
SERIALIZABLE
AUTOCOMMIT

New in version 0.9.5: support for the AUTOCOMMIT isolation level when using pg8000.
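The connect strings used throughout these dialect sections follow standard URL syntax. As a quick sketch using only the Python standard library (SQLAlchemy uses its own URL parser internally, so this is illustrative only), the user:password@host:port/dbname?key=value pieces break down like this:

```python
from urllib.parse import urlparse, parse_qs

# Illustrative only: shows how the components of a dialect connect string
# split under ordinary URL parsing rules.
url = "postgresql+pg8000://user:password@host:5432/dbname?client_encoding=utf8"
parts = urlparse(url)

print(parts.scheme)                                # postgresql+pg8000
print(parts.username, parts.hostname, parts.port)  # user host 5432
print(parts.path.lstrip("/"))                      # dbname
print(parse_qs(parts.query))                       # {'client_encoding': ['utf8']}
```

The scheme carries both the database backend and the DBAPI driver, separated by a plus sign, which is how the dialect is selected.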
psycopg2cffi
Support for the PostgreSQL database via the psycopg2cffi driver.

DBAPI
Documentation and download information (if applicable) for psycopg2cffi is available at: http://pypi.python.org/pypi/psycopg2cffi/

Connecting
Connect String:

postgresql+psycopg2cffi://user:password@host:port/dbname[?key=value&key=value...]

psycopg2cffi is an adaptation of psycopg2, using CFFI for the C layer. This makes it suitable for use in, e.g., PyPy. Documentation is as per psycopg2.

py-postgresql
Support for the PostgreSQL database via the py-postgresql driver.

DBAPI
Documentation and download information (if applicable) for py-postgresql is available at: http://python.projects.pgfoundry.org/

Connecting
Connect String:

postgresql+pypostgresql://user:password@host:port/dbname[?key=value&key=value...]

Note
The pypostgresql dialect is not tested as part of SQLAlchemy's continuous integration and may have unresolved issues. The recommended PostgreSQL driver is psycopg2.

pygresql
Support for the PostgreSQL database via the pygresql driver.

DBAPI
Documentation and download information (if applicable) for pygresql is available at: http://www.pygresql.org/

Connecting
Connect String:

postgresql+pygresql://user:password@host:port/dbname[?key=value&key=value...]

Note
The pygresql dialect is not tested as part of SQLAlchemy's continuous integration and may have unresolved issues. The recommended PostgreSQL dialect is psycopg2.

zxjdbc
Support for the PostgreSQL database via the zxJDBC for Jython driver.

DBAPI
Drivers for this database are available at: http://jdbc.postgresql.org/

Connecting
Connect String:

postgresql+zxjdbc://scott:tiger@localhost/db

PostgreSQL database support.

DBAPI support
The following dialect/DBAPI options are available. Please refer to the individual DBAPI sections for connection information.

Sequences / SERIAL / IDENTITY
PostgreSQL supports sequences, and SQLAlchemy uses these by default as the means of creating new primary key values for integer-based primary key columns.
When creating tables, SQLAlchemy will issue the SERIAL datatype for integer-based primary key columns, which generates a sequence and server-side default corresponding to the column.

To specify a specific named sequence to be used for primary key generation, use the Sequence() construct:

Table('sometable', metadata,
        Column('id', Integer, Sequence('some_id_seq'), primary_key=True)
    )

When SQLAlchemy issues a single INSERT statement, to fulfill the contract of having the "last insert identifier" available, a RETURNING clause is added to the INSERT statement which specifies that the primary key columns should be returned after the statement completes. The RETURNING functionality only takes place if PostgreSQL 8.2 or later is in use. As a fallback approach, the sequence, whether specified explicitly or implicitly via SERIAL, is executed independently beforehand, the returned value to be used in the subsequent insert. Note that when an insert() construct is executed using "executemany" semantics, the "last inserted identifier" functionality does not apply; no RETURNING clause is emitted nor is the sequence pre-executed in this case.

To force the usage of RETURNING by default off, specify the flag implicit_returning=False to create_engine().

PostgreSQL 10 IDENTITY columns
PostgreSQL 10 has a new IDENTITY feature that supersedes the use of SERIAL. Built-in support for rendering of IDENTITY is not available yet, however the following compilation hook may be used to replace occurrences of SERIAL with IDENTITY:

from sqlalchemy.schema import CreateColumn
from sqlalchemy.ext.compiler import compiles

@compiles(CreateColumn, 'postgresql')
def use_identity(element, compiler, **kw):
    text = compiler.visit_create_column(element, **kw)
    text = text.replace("SERIAL", "INT GENERATED BY DEFAULT AS IDENTITY")
    return text

Using the above, a table such as:

t = Table(
    't', m,
    Column('id', Integer, primary_key=True),
    Column('data', String)
)

Will generate on the backing database as:

CREATE TABLE t (
    id INT GENERATED BY DEFAULT AS IDENTITY NOT NULL,
    data VARCHAR,
    PRIMARY KEY (id)
)
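Since the hook above is a plain string substitution on the compiled column DDL, its effect can be previewed with ordinary Python and no database at all. The sample DDL strings here are illustrative:

```python
# The compilation hook rewrites the compiled column DDL as plain text.
ddl = "id SERIAL NOT NULL"
rewritten = ddl.replace("SERIAL", "INT GENERATED BY DEFAULT AS IDENTITY")
print(rewritten)  # id INT GENERATED BY DEFAULT AS IDENTITY NOT NULL

# The substring replacement also happens to handle BIGSERIAL, since the
# leading "BIG" is preserved and joins onto "INT":
ddl_big = "id BIGSERIAL NOT NULL"
print(ddl_big.replace("SERIAL", "INT GENERATED BY DEFAULT AS IDENTITY"))
# id BIGINT GENERATED BY DEFAULT AS IDENTITY NOT NULL
```

Being a textual substitution, the hook applies to every column whose compiled DDL contains the token SERIAL, which is exactly the set of integer primary key columns the dialect renders that way.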
Transaction Isolation Level
All PostgreSQL dialects support setting of transaction isolation level both via a dialect-specific parameter create_engine.isolation_level accepted by create_engine(), as well as the Connection.execution_options.isolation_level argument as passed to Connection.execution_options(). When using a non-psycopg2 dialect, this feature works by issuing the command SET SESSION CHARACTERISTICS AS TRANSACTION ISOLATION LEVEL for each new connection. For the special AUTOCOMMIT isolation level, DBAPI-specific techniques are used.

To set isolation level using create_engine():

engine = create_engine(
    "postgresql+pg8000://scott:tiger@localhost/test",
    isolation_level="READ UNCOMMITTED"
)

To set using per-connection execution options:

connection = engine.connect()
connection = connection.execution_options(
    isolation_level="READ COMMITTED"
)

Valid values for isolation_level include:

READ COMMITTED
READ UNCOMMITTED
REPEATABLE READ
SERIALIZABLE
AUTOCOMMIT

Remote-Schema Table Introspection and PostgreSQL search_path
TL;DR: keep the search_path variable set to its default of public, and name schemas other than public explicitly within Table definitions.

The PostgreSQL dialect can reflect tables from any schema. The Table.schema argument, or alternatively the MetaData.reflect.schema argument, determines which schema will be searched for the table or tables.
The reflected Table objects will in all cases retain this .schema attribute as was specified. However, with regards to tables which these Table objects refer to via foreign key constraint, a decision must be made as to how the .schema is represented in those remote tables, in the case where that remote schema name is also a member of the current PostgreSQL search path.

By default, the PostgreSQL dialect mimics the behavior encouraged by PostgreSQL's own pg_get_constraintdef() builtin procedure. This function returns a sample definition for a particular foreign key constraint, omitting the referenced schema name from that definition when the name is also in the PostgreSQL schema search path. The interaction below illustrates this behavior:

test=> CREATE TABLE test_schema.referred(id INTEGER PRIMARY KEY);
CREATE TABLE
test=> CREATE TABLE referring(
test(>         id INTEGER PRIMARY KEY,
test(>         referred_id INTEGER REFERENCES test_schema.referred(id));
CREATE TABLE
test=> SET search_path TO public, test_schema;
test=> SELECT pg_catalog.pg_get_constraintdef(r.oid, true) FROM
test-> pg_catalog.pg_class c JOIN pg_catalog.pg_namespace n
test-> ON n.oid = c.relnamespace
test-> JOIN pg_catalog.pg_constraint r ON c.oid = r.conrelid
test-> WHERE c.relname='referring' AND r.contype = 'f'
test-> ;
               pg_get_constraintdef
---------------------------------------------------
 FOREIGN KEY (referred_id) REFERENCES referred(id)
(1 row)

Above, we created a table referred as a member of the remote schema test_schema, however when we added test_schema to the PG search_path and then asked pg_get_constraintdef() for the FOREIGN KEY syntax, test_schema was not included in the output of the function.

On the other hand, if we set the search path back to the typical default of public:

test=> SET search_path TO public;
SET

The same query against pg_get_constraintdef() now returns the fully schema-qualified name for us:

test=> SELECT pg_catalog.pg_get_constraintdef(r.oid, true) FROM
test-> pg_catalog.pg_class c JOIN pg_catalog.pg_namespace n
test-> ON n.oid = c.relnamespace
test-> JOIN pg_catalog.pg_constraint r ON c.oid = r.conrelid
test-> WHERE c.relname='referring' AND r.contype = 'f';
                     pg_get_constraintdef
---------------------------------------------------------------
 FOREIGN KEY (referred_id) REFERENCES test_schema.referred(id)
(1 row)

SQLAlchemy will by default use the return value of pg_get_constraintdef() in order to determine the remote schema name. That is, if our search_path were set to include test_schema, and we invoked the table reflection process as follows:

>>> from sqlalchemy import Table, MetaData, create_engine
>>> engine = create_engine("postgresql://scott:tiger@localhost/test")
>>> with engine.connect() as conn:
...     conn.execute("SET search_path TO test_schema, public")
...     meta = MetaData()
...     referring = Table('referring', meta,
...                       autoload=True, autoload_with=conn)
...

The above process would deliver to the MetaData.tables collection the referred table named without the schema:

>>> meta.tables['referred'].schema is None
True

To alter the behavior of reflection such that the referred schema is maintained regardless of the search_path setting, use the postgresql_ignore_search_path option, which can be specified as a dialect-specific argument to both Table as well as MetaData.reflect():

>>> with engine.connect() as conn:
...     conn.execute("SET search_path TO test_schema, public")
...     meta = MetaData()
...     referring = Table('referring', meta, autoload=True,
...                       autoload_with=conn,
...                       postgresql_ignore_search_path=True)
...

We will now have test_schema.referred stored as schema-qualified:

>>> meta.tables['test_schema.referred'].schema
'test_schema'

Note that in all cases, the "default" schema is always reflected as None. The "default" schema on PostgreSQL is that which is returned by the PostgreSQL current_schema() function. On a typical PostgreSQL installation, this is the name public.
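The schema-omission rule that pg_get_constraintdef() applies, and which reflection mirrors by default, can be sketched as a tiny pure-Python model. This helper is hypothetical and deliberately simplified (it does not model the always-None default schema), but it captures the decision described above:

```python
def reflected_schema(referenced_schema, search_path, ignore_search_path=False):
    """Simplified model of the referenced table's reflected .schema value.

    When the remote schema is on the search path, pg_get_constraintdef()
    omits it, so reflection stores the table with schema None;
    postgresql_ignore_search_path=True keeps the schema regardless.
    """
    if ignore_search_path:
        return referenced_schema
    if referenced_schema in search_path:
        return None  # name resolves via search_path, qualifier omitted
    return referenced_schema

print(reflected_schema("test_schema", ["test_schema", "public"]))        # None
print(reflected_schema("test_schema", ["public"]))                       # test_schema
print(reflected_schema("test_schema", ["test_schema", "public"], True))  # test_schema
```

This is why the TL;DR above recommends leaving search_path at its default of public: the reflected schema then depends only on the Table definitions themselves.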
Thus, a table that refers to another which is in the public (i.e. default) schema will always have the .schema attribute set to None.

New in version 0.9.2: Added the postgresql_ignore_search_path dialect-level option accepted by Table and MetaData.reflect().

INSERT/UPDATE ... RETURNING
The dialect supports PG 8.2's INSERT..RETURNING, UPDATE..RETURNING and DELETE..RETURNING syntaxes. INSERT..RETURNING is used by default for single-row INSERT statements in order to fetch newly generated primary key identifiers. To specify an explicit RETURNING clause, use the _UpdateBase.returning() method on a per-statement basis:

# INSERT..RETURNING
result = table.insert().returning(table.c.col1, table.c.col2).\
    values(name='foo')
print(result.fetchall())

# UPDATE..RETURNING
result = table.update().returning(table.c.col1, table.c.col2).\
    where(table.c.name=='foo').values(name='bar')
print(result.fetchall())

# DELETE..RETURNING
result = table.delete().returning(table.c.col1, table.c.col2).\
    where(table.c.name=='foo')
print(result.fetchall())

INSERT ... ON CONFLICT (Upsert)
Starting with version 9.5, PostgreSQL allows "upserts" (update or insert) of rows into a table via the ON CONFLICT clause of the INSERT statement. A candidate row will only be inserted if that row does not violate any unique constraints.
In the case of a unique constraint violation, a secondary action can occur, which can be either "DO UPDATE", indicating that the data in the target row should be updated, or "DO NOTHING", which indicates to silently skip this row.

Conflicts are determined using existing unique constraints and indexes. These constraints may be identified either using their name as stated in DDL, or they may be inferred by stating the columns and conditions that comprise the indexes.

SQLAlchemy provides ON CONFLICT support via the PostgreSQL-specific postgresql.dml.insert() function, which provides the generative methods on_conflict_do_update() and on_conflict_do_nothing():

from sqlalchemy.dialects.postgresql import insert

insert_stmt = insert(my_table).values(
    id='some_existing_id',
    data='inserted value')

do_nothing_stmt = insert_stmt.on_conflict_do_nothing(
    index_elements=['id']
)

conn.execute(do_nothing_stmt)

do_update_stmt = insert_stmt.on_conflict_do_update(
    constraint='pk_my_table',
    set_=dict(data='updated value')
)

conn.execute(do_update_stmt)

Both methods supply the "target" of the conflict using either the named constraint or by column inference:

The Insert.on_conflict_do_update.index_elements argument specifies a sequence containing string column names, Column objects, and/or SQL expression elements, which would identify a unique index:

do_update_stmt = insert_stmt.on_conflict_do_update(
    index_elements=['id'],
    set_=dict(data='updated value')
)

do_update_stmt = insert_stmt.on_conflict_do_update(
    index_elements=[my_table.c.id],
    set_=dict(data='updated value')
)

When using Insert.on_conflict_do_update.index_elements to infer an index, a partial index can be inferred by also specifying the Insert.on_conflict_do_update.index_where parameter:

from sqlalchemy.dialects.postgresql import insert

stmt = insert(my_table).values(user_email='a@b.com', data='inserted data')
stmt = stmt.on_conflict_do_update(
    index_elements=[my_table.c.user_email],
    index_where=my_table.c.user_email.like('%@gmail.com'),
    set_=dict(data=stmt.excluded.data)
    )
conn.execute(stmt)

The Insert.on_conflict_do_update.constraint argument is used to specify an index directly rather than inferring it. This can be the name of a UNIQUE constraint, a PRIMARY KEY constraint, or an INDEX:

do_update_stmt = insert_stmt.on_conflict_do_update(
    constraint='my_table_idx_1',
    set_=dict(data='updated value')
)

do_update_stmt = insert_stmt.on_conflict_do_update(
    constraint='my_table_pk',
    set_=dict(data='updated value')
)

The Insert.on_conflict_do_update.constraint argument may also refer to a SQLAlchemy construct representing a constraint, e.g. UniqueConstraint, PrimaryKeyConstraint, Index, or ExcludeConstraint. In this use, if the constraint has a name, it is used directly. Otherwise, if the constraint is unnamed, then inference will be used, where the expressions and optional WHERE clause of the constraint will be spelled out in the construct. This use is especially convenient to refer to the named or unnamed primary key of a Table using the Table.primary_key attribute:

do_update_stmt = insert_stmt.on_conflict_do_update(
    constraint=my_table.primary_key,
    set_=dict(data='updated value')
)

ON CONFLICT ... DO UPDATE is used to perform an update of the already existing row, using any combination of new values as well as values from the proposed insertion. These values are specified using the Insert.on_conflict_do_update.set_ parameter. This parameter accepts a dictionary which consists of direct values for UPDATE:

from sqlalchemy.dialects.postgresql import insert

stmt = insert(my_table).values(id='some_id', data='inserted value')
do_update_stmt = stmt.on_conflict_do_update(
    index_elements=['id'],
    set_=dict(data='updated value')
    )
conn.execute(do_update_stmt)

In order to refer to the proposed insertion row, the special alias excluded is available as an attribute on the postgresql.dml.Insert object; this object is a ColumnCollection which alias contains all columns of the target table:

from sqlalchemy.dialects.postgresql import insert

stmt = insert(my_table).values(
    id='some_id',
    data='inserted value',
    author='jlh')
do_update_stmt = stmt.on_conflict_do_update(
    index_elements=['id'],
    set_=dict(data='updated value', author=stmt.excluded.author)
    )
conn.execute(do_update_stmt)

The Insert.on_conflict_do_update() method also accepts a WHERE clause using the Insert.on_conflict_do_update.where parameter, which will limit those rows which receive an UPDATE:

from sqlalchemy.dialects.postgresql import insert

stmt = insert(my_table).values(
    id='some_id',
    data='inserted value',
    author='jlh')
on_update_stmt = stmt.on_conflict_do_update(
    index_elements=['id'],
    set_=dict(data='updated value', author=stmt.excluded.author),
    where=(my_table.c.status == 2)
    )
conn.execute(on_update_stmt)

ON CONFLICT may also be used to skip inserting a row entirely if any conflict with a unique or exclusion constraint occurs; below this is illustrated using the on_conflict_do_nothing() method:

from sqlalchemy.dialects.postgresql import insert

stmt = insert(my_table).values(id='some_id', data='inserted value')
stmt = stmt.on_conflict_do_nothing(index_elements=['id'])
conn.execute(stmt)

If DO NOTHING is used without specifying any columns or constraint, it has the effect of skipping the INSERT for any unique or exclusion constraint violation which occurs:

from sqlalchemy.dialects.postgresql import insert

stmt = insert(my_table).values(id='some_id', data='inserted value')
stmt = stmt.on_conflict_do_nothing()
conn.execute(stmt)

New in version 1.1: Added support for PostgreSQL ON CONFLICT clauses
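To make the shape of the rendered statement concrete without a database, here is a purely illustrative string builder (not SQLAlchemy's compiler, whose output also handles quoting and bound parameters) producing the same kind of SQL that insert(...).on_conflict_do_update(...) ultimately compiles to under the psycopg2 pyformat parameter style:

```python
def render_upsert(table, insert_cols, conflict_cols, update_cols):
    # Illustrative only: sketches the compiled SQL shape of an upsert.
    collist = ", ".join(insert_cols)
    params = ", ".join("%%(%s)s" % c for c in insert_cols)
    target = ", ".join(conflict_cols)
    sets = ", ".join("%s = excluded.%s" % (c, c) for c in update_cols)
    return ("INSERT INTO %s (%s) VALUES (%s) "
            "ON CONFLICT (%s) DO UPDATE SET %s"
            % (table, collist, params, target, sets))

print(render_upsert("my_table", ["id", "data"], ["id"], ["data"]))
# INSERT INTO my_table (id, data) VALUES (%(id)s, %(data)s)
#     ON CONFLICT (id) DO UPDATE SET data = excluded.data
```

Note how the special excluded alias appears on the right-hand side of the SET clause, which is exactly what the stmt.excluded attribute refers to in the examples above.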
text @@ to_tsquery(&#39;search string&#39;) FROM table"},{"id":"text-76","heading":"Text","content":"PostgreSQL text search functions such as to_tsquery()\nand to_tsvector() are available\nexplicitly using the standard func construct. For example:"},{"id":"text-77","heading":"Text","content":"select([\n    func.to_tsvector(&#39;fat cats ate rats&#39;).match(&#39;cat &amp; rat&#39;)\n])"},{"id":"text-78","heading":"Text","content":"Emits the equivalent of:"},{"id":"text-79","heading":"Text","content":"SELECT to_tsvector(&#39;fat cats ate rats&#39;) @@ to_tsquery(&#39;cat &amp; rat&#39;)"},{"id":"text-80","heading":"Text","content":"The postgresql.TSVECTOR type can provide for explicit CAST:"},{"id":"text-81","heading":"Text","content":"from sqlalchemy.dialects.postgresql import TSVECTOR\nfrom sqlalchemy import select, cast\nselect([cast(&quot;some text&quot;, TSVECTOR)])"},{"id":"text-82","heading":"Text","content":"produces a statement equivalent to:"},{"id":"text-83","heading":"Text","content":"SELECT CAST(&#39;some text&#39; AS TSVECTOR) AS anon_1"},{"id":"text-84","heading":"Text","content":"Full Text Searches in PostgreSQL are influenced by a combination of: the\nPostgreSQL setting of default_text_search_config, the regconfig used\nto build the GIN/GiST indexes, and the regconfig optionally passed in\nduring a query.\nWhen performing a Full Text Search against a column that has a GIN or\nGiST index that is already pre-computed (which is common on full text\nsearches), it may be necessary to explicitly pass in a particular PostgreSQL\nregconfig value to ensure the query planner utilizes the index and does\nnot re-compute the column on demand.\nIn order to provide for this explicit query planning, or to use 
different\nsearch strategies, the match() method accepts a postgresql_regconfig\nkeyword argument:"},{"id":"text-85","heading":"Text","content":"select([mytable.c.id]).where(\n    mytable.c.title.match(&#39;somestring&#39;, postgresql_regconfig=&#39;english&#39;)\n)"},{"id":"text-86","heading":"Text","content":"Emits the equivalent of:"},{"id":"text-87","heading":"Text","content":"SELECT mytable.id FROM mytable\nWHERE mytable.title @@ to_tsquery(&#39;english&#39;, &#39;somestring&#39;)"},{"id":"text-88","heading":"Text","content":"One can also specifically pass in a &#39;regconfig&#39; value to the\nto_tsvector() command as the initial argument:"},{"id":"text-89","heading":"Text","content":"select([mytable.c.id]).where(\n        func.to_tsvector(&#39;english&#39;, mytable.c.title)\n        .match(&#39;somestring&#39;, postgresql_regconfig=&#39;english&#39;)\n    )"},{"id":"text-90","heading":"Text","content":"produces a statement equivalent to:"},{"id":"text-91","heading":"Text","content":"SELECT mytable.id FROM mytable\nWHERE to_tsvector(&#39;english&#39;, mytable.title) @@\n    to_tsquery(&#39;english&#39;, &#39;somestring&#39;)"},{"id":"text-92","heading":"Text","content":"It is recommended that you use the EXPLAIN ANALYZE... tool from\nPostgreSQL to ensure that you are generating queries with SQLAlchemy that\ntake full advantage of any indexes you may have created for full text search."},{"id":"text-93","heading":"Text","content":"FROM ONLY ...\nThe dialect supports PostgreSQL&#39;s ONLY keyword for targeting only a particular\ntable in an inheritance hierarchy. This can be used to produce the\nSELECT ... FROM ONLY, UPDATE ONLY ..., and DELETE FROM ONLY ...\nsyntaxes. It uses SQLAlchemy&#39;s hints mechanism:"},{"id":"text-94","heading":"Text","content":"# SELECT ... 
FROM ONLY ...\nresult = table.select().with_hint(table, &#39;ONLY&#39;, &#39;postgresql&#39;)\nprint(result.fetchall())"},{"id":"text-95","heading":"Text","content":"# UPDATE ONLY ...\ntable.update(values=dict(foo=&#39;bar&#39;)).with_hint(&#39;ONLY&#39;,\n                                               dialect_name=&#39;postgresql&#39;)"},{"id":"text-96","heading":"Text","content":"# DELETE FROM ONLY ...\ntable.delete().with_hint(&#39;ONLY&#39;, dialect_name=&#39;postgresql&#39;)"},{"id":"text-97","heading":"Text","content":"PostgreSQL-Specific Index Options\nSeveral extensions to the Index construct are available, specific\nto the PostgreSQL dialect."},{"id":"text-98","heading":"Text","content":"Partial Indexes\nPartial indexes add criterion to the index definition so that the index is\napplied to a subset of rows. These can be specified on Index\nusing the postgresql_where keyword argument:"},{"id":"text-99","heading":"Text","content":"Index(&#39;my_index&#39;, my_table.c.id, postgresql_where=my_table.c.value &gt; 10)"},{"id":"text-100","heading":"Text","content":"Operator Classes\nPostgreSQL allows the specification of an operator class for each column of\nan index (see\nhttp://www.postgresql.org/docs/8.3/interactive/indexes-opclass.html).\nThe Index construct allows these to be specified via the\npostgresql_ops keyword argument:"},{"id":"text-101","heading":"Text","content":"Index(\n    &#39;my_index&#39;, my_table.c.id, my_table.c.data,\n    postgresql_ops={\n        &#39;data&#39;: &#39;text_pattern_ops&#39;,\n        &#39;id&#39;: &#39;int4_ops&#39;\n    })"},{"id":"text-102","heading":"Text","content":"Note that the keys in the postgresql_ops dictionary are the &quot;key&quot; name of\nthe Column, i.e. the name used to access it from the .c\ncollection of Table, 
which may be configured to be different than\nthe actual name of the column as expressed in the database.\nIf postgresql_ops is to be used against a complex SQL expression such\nas a function call, then to apply to the column it must be given a label\nthat is identified in the dictionary by name, e.g.:"},{"id":"text-103","heading":"Text","content":"Index(\n    &#39;my_index&#39;, my_table.c.id,\n    func.lower(my_table.c.data).label(&#39;data_lower&#39;),\n    postgresql_ops={\n        &#39;data_lower&#39;: &#39;text_pattern_ops&#39;,\n        &#39;id&#39;: &#39;int4_ops&#39;\n    })"},{"id":"text-104","heading":"Text","content":"Index Types\nPostgreSQL provides several index types: B-Tree, Hash, GiST, and GIN, as well\nas the ability for users to create their own (see\nhttp://www.postgresql.org/docs/8.3/static/indexes-types.html). These can be\nspecified on Index using the postgresql_using keyword argument:"},{"id":"text-105","heading":"Text","content":"Index(&#39;my_index&#39;, my_table.c.data, postgresql_using=&#39;gin&#39;)"},{"id":"text-106","heading":"Text","content":"The value passed to the keyword argument will be simply passed through to the\nunderlying CREATE INDEX command, so it must be a valid index type for your\nversion of PostgreSQL."},{"id":"text-107","heading":"Text","content":"Index Storage Parameters\nPostgreSQL allows storage parameters to be set on indexes. The storage\nparameters available depend on the index method used by the index. 
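The way postgresql_using is passed straight through to CREATE INDEX, as described above, can be sketched with a small standalone helper (a hypothetical illustration, not the dialect's actual DDL compiler):

```python
def render_create_index(name, table, columns, using=None):
    """Sketch of how a pass-through index method lands in the DDL.

    Mirrors the documented behavior: the postgresql_using value is
    emitted verbatim in a USING clause of CREATE INDEX.
    """
    using_clause = " USING %s" % using if using else ""
    return "CREATE INDEX %s ON %s%s (%s)" % (
        name, table, using_clause, ", ".join(columns))

# Example: a GIN index, as in Index('my_index', my_table.c.data,
# postgresql_using='gin')
print(render_create_index("my_index", "my_table", ["data"], using="gin"))
```

Because the value is not validated by SQLAlchemy, an invalid index method would only fail when PostgreSQL executes the statement.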
Storage\nparameters can be specified on Index using the postgresql_with\nkeyword argument:"},{"id":"text-108","heading":"Text","content":"Index(&#39;my_index&#39;, my_table.c.data, postgresql_with={&quot;fillfactor&quot;: 50})"},{"id":"text-109","heading":"Text","content":"PostgreSQL allows to define the tablespace in which to create the index.\nThe tablespace can be specified on Index using the\npostgresql_tablespace keyword argument:"},{"id":"text-110","heading":"Text","content":"Index(&#39;my_index&#39;, my_table.c.data, postgresql_tablespace=&#39;my_tablespace&#39;)"},{"id":"text-111","heading":"Text","content":"Note that the same option is available on Table as well."},{"id":"text-112","heading":"Text","content":"Indexes with CONCURRENTLY\nThe PostgreSQL index option CONCURRENTLY is supported by passing the\nflag postgresql_concurrently to the Index construct:"},{"id":"text-113","heading":"Text","content":"tbl = Table(&#39;testtbl&#39;, m, Column(&#39;data&#39;, Integer))"},{"id":"text-114","heading":"Text","content":"idx1 = Index(&#39;test_idx1&#39;, tbl.c.data, postgresql_concurrently=True)"},{"id":"text-115","heading":"Text","content":"The above index construct will render DDL for CREATE INDEX, assuming\nPostgreSQL 8.2 or higher is detected or for a connection-less dialect, as:"},{"id":"text-116","heading":"Text","content":"CREATE INDEX CONCURRENTLY test_idx1 ON testtbl (data)"},{"id":"text-117","heading":"Text","content":"For DROP INDEX, assuming PostgreSQL 9.2 or higher is detected or for\na connection-less dialect, it will emit:"},{"id":"text-118","heading":"Text","content":"DROP INDEX CONCURRENTLY test_idx1"},{"id":"text-119","heading":"Text","content":"New in version 1.1: support for CONCURRENTLY on DROP INDEX. 
The\nCONCURRENTLY keyword is now only emitted if a high enough version\nof PostgreSQL is detected on the connection (or for a connection-less\ndialect)."},{"id":"text-120","heading":"Text","content":"When using CONCURRENTLY, the PostgreSQL database requires that the statement\nbe invoked outside of a transaction block. The Python DBAPI enforces that\neven for a single statement, a transaction is present, so to use this\nconstruct, the DBAPI&#39;s &quot;autocommit&quot; mode must be used:"},{"id":"text-121","heading":"Text","content":"metadata = MetaData()\ntable = Table(\n    &quot;foo&quot;, metadata,\n    Column(&quot;id&quot;, String))\nindex = Index(\n    &quot;foo_idx&quot;, table.c.id, postgresql_concurrently=True)"},{"id":"text-122","heading":"Text","content":"with engine.connect() as conn:\n    with conn.execution_options(isolation_level=&#39;AUTOCOMMIT&#39;):\n        table.create(conn)"},{"id":"text-123","heading":"Text","content":"PostgreSQL Index Reflection\nThe PostgreSQL database creates a UNIQUE INDEX implicitly whenever the\nUNIQUE CONSTRAINT construct is used. When inspecting a table using\nInspector, the Inspector.get_indexes()\nand the Inspector.get_unique_constraints() methods will report on these\ntwo constructs distinctly; in the case of the index, the key\nduplicates_constraint will be present in the index entry if it is\ndetected as mirroring a constraint. 
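A caller can use that documented duplicates_constraint key to separate standalone indexes from the implicit mirrors. A minimal sketch over hypothetical sample data shaped like Inspector.get_indexes() output (names and columns are invented for illustration):

```python
# Hypothetical reflected output; only the documented keys matter here.
sample_indexes = [
    {"name": "ix_user_email", "column_names": ["email"], "unique": False},
    {"name": "uq_user_name", "column_names": ["name"], "unique": True,
     "duplicates_constraint": "uq_user_name"},  # mirrors a UNIQUE CONSTRAINT
]

# Keep only the indexes that do not merely mirror a unique constraint.
standalone = [ix for ix in sample_indexes
              if "duplicates_constraint" not in ix]
print([ix["name"] for ix in standalone])
```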
When performing reflection using\nTable(..., autoload=True), the UNIQUE INDEX is not returned\nin Table.indexes when it is detected as mirroring a\nUniqueConstraint in the Table.constraints collection."},{"id":"text-124","heading":"Text","content":"Changed in version 1.0.0: &#8211; Table reflection now includes\nUniqueConstraint objects present in the Table.constraints\ncollection; the PostgreSQL backend will no longer include a “mirrored”\nIndex construct in Table.indexes if it is detected\nas corresponding to a unique constraint."},{"id":"text-125","heading":"Text","content":"Special Reflection Options\nThe Inspector used for the PostgreSQL backend is an instance\nof PGInspector, which offers additional methods:"},{"id":"text-126","heading":"Text","content":"from sqlalchemy import create_engine, inspect"},{"id":"text-127","heading":"Text","content":"engine = create_engine(&quot;postgresql+psycopg2://localhost/test&quot;)\ninsp = inspect(engine)  # will be a PGInspector"},{"id":"text-128","heading":"Text","content":"print(insp.get_enums())"},{"id":"text-129","heading":"Text","content":"class sqlalchemy.dialects.postgresql.base.PGInspector(conn)"},{"id":"text-130","heading":"Text","content":"Bases: sqlalchemy.engine.reflection.Inspector"},{"id":"text-131","heading":"Text","content":"get_enums(schema=None)"},{"id":"text-132","heading":"Text","content":"Return a list of ENUM objects.\nEach member is a dictionary containing these fields:"},{"id":"text-133","heading":"Text","content":"name &#8211; name of the enum"},{"id":"text-134","heading":"Text","content":"schema &#8211; the schema name for the enum."},{"id":"text-135","heading":"Text","content":"visible &#8211; boolean, whether or not this enum is visible\nin the default search path."},{"id":"text-136","heading":"Text","content":"labels &#8211; a list 
of string labels that apply to the enum."},{"id":"text-137","heading":"Text","content":"Parameters"},{"id":"text-138","heading":"Text","content":"schema &#8211; schema name. If None, the default schema\n(typically &#39;public&#39;) is used. May also be set to &#39;*&#39; to\nindicate load enums for all schemas."},{"id":"text-139","heading":"Text","content":"get_foreign_table_names(schema=None)"},{"id":"text-140","heading":"Text","content":"Return a list of FOREIGN TABLE names.\nBehavior is similar to that of Inspector.get_table_names(),\nexcept that the list is limited to those tables that report a\nrelkind value of f."},{"id":"text-141","heading":"Text","content":"get_table_oid(table_name, schema=None)"},{"id":"text-142","heading":"Text","content":"Return the OID for the given table name."},{"id":"text-143","heading":"Text","content":"get_view_names(schema=None, include=(&#39;plain&#39;, &#39;materialized&#39;))"},{"id":"text-144","heading":"Text","content":"Return all view names in schema."},{"id":"text-145","heading":"Text","content":"Parameters"},{"id":"text-146","heading":"Text","content":"schema &#8211; Optional, retrieve names from a non-default schema.\nFor special quoting, use quoted_name."},{"id":"text-147","heading":"Text","content":"include &#8211; specify which types of views to return. Passed\nas a string value (for a single type) or a tuple (for any number\nof types). 
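The get_enums() return shape documented above (name, schema, visible, labels) is plain dictionaries, so it can be post-processed with ordinary Python. A sketch over hypothetical sample data (the enum names and labels below are invented):

```python
# Hypothetical sample shaped like PGInspector.get_enums() output.
sample_enums = [
    {"name": "myenum", "schema": "public", "visible": True,
     "labels": ["a", "b", "c"]},
    {"name": "status", "schema": "private", "visible": False,
     "labels": ["on", "off"]},
]

def labels_for(enums, name, schema="public"):
    """Return the labels of one enum by name and schema, or None."""
    for e in enums:
        if e["name"] == name and e["schema"] == schema:
            return e["labels"]
    return None

print(labels_for(sample_enums, "myenum"))
```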
Defaults to (&#39;plain&#39;, &#39;materialized&#39;)."},{"id":"text-148","heading":"Text","content":"PostgreSQL Table Options\nSeveral options for CREATE TABLE are supported directly by the PostgreSQL\ndialect in conjunction with the Table construct:"},{"id":"text-149","heading":"Text","content":"ARRAY Types\nThe PostgreSQL dialect supports arrays, both as multidimensional column types\nas well as array literals:"},{"id":"text-150","heading":"Text","content":"JSON Types\nThe PostgreSQL dialect supports both JSON and JSONB datatypes, including\npsycopg2&#39;s native support and support for all of PostgreSQL&#39;s special\noperators:"},{"id":"text-151","heading":"Text","content":"HSTORE Type\nThe PostgreSQL HSTORE type as well as hstore literals are supported:"},{"id":"text-152","heading":"Text","content":"ENUM Types\nPostgreSQL has an independently creatable TYPE structure which is used\nto implement an enumerated type. This approach introduces significant\ncomplexity on the SQLAlchemy side in terms of when this type should be\nCREATED and DROPPED. The type object is also an independently reflectable\nentity. The following sections should be consulted:"},{"id":"text-153","heading":"Text","content":"Using ENUM with ARRAY\nThe combination of ENUM and ARRAY is not directly supported by backend\nDBAPIs at this time. 
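The reason a workaround is needed at all is the wire format: PostgreSQL returns an array of ENUM as a single string literal such as {a,b,c}, which the client must parse back into a list. The parsing step used by the workaround type below can be shown in isolation (simple, unquoted elements only):

```python
import re

def parse_pg_array_literal(value):
    """Parse a simple PostgreSQL array literal such as '{a,b,c}'.

    This mirrors the handle_raw_string() step of the ArrayOfEnum
    workaround; it does not handle quoted or nested elements.
    """
    inner = re.match(r"^{(.*)}$", value).group(1)
    return inner.split(",") if inner else []

print(parse_pg_array_literal("{a,b,c}"))
```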
In order to send and receive an ARRAY of ENUM,\nuse the following workaround type, which decorates the\npostgresql.ARRAY datatype."},{"id":"text-154","heading":"Text","content":"import re\nimport sqlalchemy as sa\nfrom sqlalchemy import TypeDecorator\nfrom sqlalchemy.dialects.postgresql import ARRAY"},{"id":"text-155","heading":"Text","content":"class ArrayOfEnum(TypeDecorator):\n    impl = ARRAY"},{"id":"text-156","heading":"Text","content":"    def bind_expression(self, bindvalue):\n        return sa.cast(bindvalue, self)"},{"id":"text-157","heading":"Text","content":"    def result_processor(self, dialect, coltype):\n        super_rp = super(ArrayOfEnum, self).result_processor(\n            dialect, coltype)"},{"id":"text-158","heading":"Text","content":"        def handle_raw_string(value):\n            inner = re.match(r&quot;^{(.*)}$&quot;, value).group(1)\n            return inner.split(&quot;,&quot;) if inner else []"},{"id":"text-159","heading":"Text","content":"        def process(value):\n            if value is None:\n                return None\n            return super_rp(handle_raw_string(value))\n        return process"},{"id":"text-160","heading":"Text","content":"E.g.:"},{"id":"text-161","heading":"Text","content":"Table(\n    &#39;mydata&#39;, metadata,\n    Column(&#39;id&#39;, Integer, primary_key=True),\n    Column(&#39;data&#39;, ArrayOfEnum(ENUM(&#39;a&#39;, &#39;b&#39;, &#39;c&#39;, name=&#39;myenum&#39;)))"},{"id":"text-162","heading":"Text","content":")"},{"id":"text-163","heading":"Text","content":"This type is not included as a built-in type as it would be incompatible\nwith a DBAPI that suddenly decides to support ARRAY of ENUM directly in\na new version."},{"id":"text-164","heading":"Text","content":"Using JSON/JSONB with ARRAY\nSimilar to using ENUM, for an ARRAY of JSON/JSONB we need to render the\nappropriate CAST, however current psycopg2 drivers seem to handle 
the result\nfor ARRAY of JSON automatically, so the type is simpler:"},{"id":"text-165","heading":"Text","content":"class CastingArray(ARRAY):\n    def bind_expression(self, bindvalue):\n        return sa.cast(bindvalue, self)"},{"id":"text-166","heading":"Text","content":"E.g.:"},{"id":"text-167","heading":"Text","content":"Table(\n    &#39;mydata&#39;, metadata,\n    Column(&#39;id&#39;, Integer, primary_key=True),\n    Column(&#39;data&#39;, CastingArray(JSONB))\n)"},{"id":"text-168","heading":"Text","content":"PostgreSQL Data Types\nAs with all SQLAlchemy dialects, all UPPERCASE types that are known to be\nvalid with PostgreSQL are importable from the top level dialect, whether\nthey originate from sqlalchemy.types or from the local dialect:"},{"id":"text-169","heading":"Text","content":"from sqlalchemy.dialects.postgresql import (\n    ARRAY, BIGINT, BIT, BOOLEAN, BYTEA, CHAR, CIDR, DATE,\n    DOUBLE_PRECISION, ENUM, FLOAT, HSTORE, INET, INTEGER,\n    INTERVAL, JSON, JSONB, MACADDR, MONEY, NUMERIC, OID, REAL, SMALLINT, TEXT,\n    TIME, TIMESTAMP, UUID, VARCHAR, INT4RANGE, INT8RANGE, NUMRANGE,\n    DATERANGE, TSRANGE, TSTZRANGE, TSVECTOR)"},{"id":"text-170","heading":"Text","content":"Types which are specific to PostgreSQL, or have PostgreSQL-specific\nconstruction arguments, are as follows:"},{"id":"text-171","heading":"Text","content":"class sqlalchemy.dialects.postgresql.aggregate_order_by(target, *order_by)"},{"id":"text-172","heading":"Text","content":"Bases: sqlalchemy.sql.expression.ColumnElement\nRepresent a PostgreSQL aggregate order by expression.\nE.g.:"},{"id":"text-173","heading":"Text","content":"from sqlalchemy.dialects.postgresql import aggregate_order_by\nexpr = func.array_agg(aggregate_order_by(table.c.a, table.c.b.desc()))\nstmt = select([expr])"},{"id":"text-174","heading":"Text","content":"would represent the 
expression:"},{"id":"text-175","heading":"Text","content":"SELECT array_agg(a ORDER BY b DESC) FROM table;"},{"id":"text-176","heading":"Text","content":"Similarly:"},{"id":"text-177","heading":"Text","content":"expr = func.string_agg(\n    table.c.a,\n    aggregate_order_by(literal_column(&quot;&#39;,&#39;&quot;), table.c.a)\n)\nstmt = select([expr])"},{"id":"text-178","heading":"Text","content":"Would represent:"},{"id":"text-179","heading":"Text","content":"SELECT string_agg(a, &#39;,&#39; ORDER BY a) FROM table;"},{"id":"text-180","heading":"Text","content":"Changed in version 1.2.13: &#8211; the ORDER BY argument may be multiple terms"},{"id":"text-181","heading":"Text","content":"class sqlalchemy.dialects.postgresql.array(clauses, **kw)"},{"id":"text-182","heading":"Text","content":"Bases: sqlalchemy.sql.expression.Tuple\nA PostgreSQL ARRAY literal.\nThis is used to produce ARRAY literals in SQL expressions, e.g.:"},{"id":"text-183","heading":"Text","content":"from sqlalchemy.dialects.postgresql import array\nfrom sqlalchemy.dialects import postgresql\nfrom sqlalchemy import select, func"},{"id":"text-184","heading":"Text","content":"stmt = select([\n                array([1,2]) + array([3,4,5])\n            ])"},{"id":"text-185","heading":"Text","content":"print(stmt.compile(dialect=postgresql.dialect()))"},{"id":"text-186","heading":"Text","content":"Produces the SQL:"},{"id":"text-187","heading":"Text","content":"SELECT ARRAY[%(param_1)s, %(param_2)s] ||\n    ARRAY[%(param_3)s, %(param_4)s, %(param_5)s] AS anon_1"},{"id":"text-188","heading":"Text","content":"An instance of array will always have the datatype\nARRAY. 
The “inner” type of the array is inferred from\nthe values present, unless the type_ keyword argument is passed:"},{"id":"text-189","heading":"Text","content":"array([&#39;foo&#39;, &#39;bar&#39;], type_=CHAR)"},{"id":"text-190","heading":"Text","content":"Multidimensional arrays are produced by nesting array constructs.\nThe dimensionality of the final ARRAY type is calculated by\nrecursively adding the dimensions of the inner ARRAY type:"},{"id":"text-191","heading":"Text","content":"stmt = select([\n    array([\n        array([1, 2]), array([3, 4]), array([column(&#39;q&#39;), column(&#39;x&#39;)])\n    ])\n])\nprint(stmt.compile(dialect=postgresql.dialect()))"},{"id":"text-192","heading":"Text","content":"Produces:"},{"id":"text-193","heading":"Text","content":"SELECT ARRAY[ARRAY[%(param_1)s, %(param_2)s],\nARRAY[%(param_3)s, %(param_4)s], ARRAY[q, x]] AS anon_1"},{"id":"text-194","heading":"Text","content":"New in version 1.3.6: added support for multidimensional array literals"},{"id":"text-195","heading":"Text","content":"class sqlalchemy.dialects.postgresql.ARRAY(item_type, as_tuple=False, dimensions=None, zero_indexes=False)"},{"id":"text-196","heading":"Text","content":"Bases: sqlalchemy.types.ARRAY\nPostgreSQL ARRAY type.\nThe postgresql.ARRAY type is constructed in the same way\nas the core types.ARRAY type; a member type is required, and a\nnumber of dimensions is recommended if the type is to be used for more\nthan one dimension:"},{"id":"text-197","heading":"Text","content":"from sqlalchemy.dialects import postgresql"},{"id":"text-198","heading":"Text","content":"mytable = Table(&quot;mytable&quot;, metadata,\n        Column(&quot;data&quot;, postgresql.ARRAY(Integer, dimensions=2))\n    )"},{"id":"text-199","heading":"Text","content":"The postgresql.ARRAY type provides all operations defined on the\ncore types.ARRAY type, including support for “dimensions”,\nindexed access, and 
simple matching such as\ntypes.ARRAY.Comparator.any() and\ntypes.ARRAY.Comparator.all().  The postgresql.ARRAY class also\nprovides PostgreSQL-specific methods for containment operations, including\npostgresql.ARRAY.Comparator.contains(),\npostgresql.ARRAY.Comparator.contained_by(), and\npostgresql.ARRAY.Comparator.overlap(), e.g.:"},{"id":"text-200","heading":"Text","content":"mytable.c.data.contains([1, 2])"},{"id":"text-201","heading":"Text","content":"The postgresql.ARRAY type may not be supported on all\nPostgreSQL DBAPIs; it is currently known to work on psycopg2 only.\nAdditionally, the postgresql.ARRAY type does not work directly in\nconjunction with the ENUM type.  For a workaround, see the\nspecial type at Using ENUM with ARRAY."},{"id":"text-202","heading":"Text","content":"class Comparator(expr)"},{"id":"text-203","heading":"Text","content":"Bases: sqlalchemy.types.Comparator\nDefine comparison operations for ARRAY.\nNote that these operations are in addition to those provided\nby the base types.ARRAY.Comparator class, including\ntypes.ARRAY.Comparator.any() and\ntypes.ARRAY.Comparator.all()."},{"id":"text-204","heading":"Text","content":"contained_by(other)"},{"id":"text-205","heading":"Text","content":"Boolean expression.  Test if elements are a proper subset of the\nelements of the argument array expression."},{"id":"text-206","heading":"Text","content":"contains(other, **kwargs)"},{"id":"text-207","heading":"Text","content":"Boolean expression.  Test if elements are a superset of the\nelements of the argument array expression."},{"id":"text-208","heading":"Text","content":"overlap(other)"},{"id":"text-209","heading":"Text","content":"Boolean expression.  
Test if array has elements in common with\nan argument array expression."},{"id":"text-210","heading":"Text","content":"__init__(item_type, as_tuple=False, dimensions=None, zero_indexes=False)"},{"id":"text-211","heading":"Text","content":"Construct an ARRAY.\nE.g.:"},{"id":"text-212","heading":"Text","content":"Column(&#39;myarray&#39;, ARRAY(Integer))"},{"id":"text-213","heading":"Text","content":"Arguments are:"},{"id":"text-214","heading":"Text","content":"Parameters"},{"id":"text-215","heading":"Text","content":"item_type – The data type of items of this array. Note that\ndimensionality is irrelevant here, so multi-dimensional arrays like\nINTEGER[][], are constructed as ARRAY(Integer), not as\nARRAY(ARRAY(Integer))    or such."},{"id":"text-216","heading":"Text","content":"as_tuple=False – Specify whether return results\nshould be converted to tuples from lists. DBAPIs such\nas psycopg2 return lists by default. When tuples are\nreturned, the results are hashable."},{"id":"text-217","heading":"Text","content":"dimensions – if non-None, the ARRAY will assume a fixed\nnumber of dimensions.  
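How a fixed dimension count surfaces in the rendered column type can be sketched with a hypothetical helper (an illustration only, not the dialect's actual type compiler):

```python
def render_array_type(inner_type, dimensions=None):
    """Render a PostgreSQL array column type with explicit dimensions.

    Sketch: ARRAY(Integer, dimensions=2) corresponds to INTEGER[][];
    with no dimensions given, a single bracket clause is assumed here.
    """
    return inner_type + "[]" * (dimensions or 1)

print(render_array_type("INTEGER", 2))
```

Note that regardless of the declared dimensions, PostgreSQL itself treats arrays as "non-dimensioned" at runtime, as described below.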
This will cause the DDL emitted for this\nARRAY to include the exact number of bracket clauses [],\nand will also optimize the performance of the type overall.\nNote that PG arrays are always implicitly “non-dimensioned”,\nmeaning they can store any number of dimensions no matter how\nthey were declared."},{"id":"text-218","heading":"Text","content":"zero_indexes=False &#8211; when True, index values will be converted\nbetween Python zero-based and PostgreSQL one-based indexes, e.g.\na value of one will be added to all index values before passing\nto the database."},{"id":"text-219","heading":"Text","content":"sqlalchemy.dialects.postgresql.array_agg(*arg, **kw)"},{"id":"text-220","heading":"Text","content":"PostgreSQL-specific form of array_agg, ensures\nreturn type is postgresql.ARRAY    and not\nthe plain types.ARRAY, unless an explicit type_\nis passed."},{"id":"text-221","heading":"Text","content":"sqlalchemy.dialects.postgresql.Any(other, arrexpr, operator=&lt;built-in function eq&gt;)"},{"id":"text-222","heading":"Text","content":"A synonym for the ARRAY.Comparator.any()    method.\nThis method is legacy and is here for backwards-compatibility."},{"id":"text-223","heading":"Text","content":"sqlalchemy.dialects.postgresql.All(other, arrexpr, operator=&lt;built-in function eq&gt;)"},{"id":"text-224","heading":"Text","content":"A synonym for the ARRAY.Comparator.all()    method.\nThis method is legacy and is here for backwards-compatibility."},{"id":"text-225","heading":"Text","content":"class sqlalchemy.dialects.postgresql.BIT(length=None, varying=False)"},{"id":"text-226","heading":"Text","content":"Bases: sqlalchemy.types.TypeEngine"},{"id":"text-227","heading":"Text","content":"class sqlalchemy.dialects.postgresql.BYTEA(length=None)"},{"id":"text-228","heading":"Text","content":"Bases: sqlalchemy.types.LargeBinary"},{"id":"text-229","heading":"Text","content":"__init__(length=None)"},{"id":"text-230","heading":"Text","content":"Construct a LargeBinary 
type."},{"id":"text-231","heading":"Text","content":"Parameters"},{"id":"text-232","heading":"Text","content":"length – optional, a length for the column for use in\nDDL statements, for those binary types that accept a length,\nsuch as the MySQL BLOB type."},{"id":"text-233","heading":"Text","content":"class sqlalchemy.dialects.postgresql.CIDR"},{"id":"text-234","heading":"Text","content":"Bases: sqlalchemy.types.TypeEngine"},{"id":"text-235","heading":"Text","content":"class sqlalchemy.dialects.postgresql.DOUBLE_PRECISION(precision=None, asdecimal=False, decimal_return_scale=None)"},{"id":"text-236","heading":"Text","content":"Bases: sqlalchemy.types.Float"},{"id":"text-237","heading":"Text","content":"__init__(precision=None, asdecimal=False, decimal_return_scale=None)"},{"id":"text-238","heading":"Text","content":"Construct a Float."},{"id":"text-239","heading":"Text","content":"Parameters"},{"id":"text-240","heading":"Text","content":"precision – the numeric precision for use in DDL CREATE\nTABLE."},{"id":"text-241","heading":"Text","content":"asdecimal – the same flag as that of Numeric, but\ndefaults to False. Note that setting this flag to True\nresults in floating point conversion."},{"id":"text-242","heading":"Text","content":"decimal_return_scale &#8211; Default scale to use when converting\nfrom floats to Python decimals.  Floating point values will typically\nbe much longer due to decimal inaccuracy, and most floating point\ndatabase types don’t have a notion of “scale”, so by default the\nfloat type looks for the first ten decimal places when converting.\nSpecifying this value will override that length.  
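The described default, taking the first ten decimal places when converting a float to a Decimal, can be sketched with the standard decimal module (an illustration of the documented behavior, not SQLAlchemy's internal result processor):

```python
from decimal import Decimal

def float_to_decimal(value, decimal_return_scale=10):
    """Convert a float to a Decimal using a fixed number of decimal
    places, mirroring the documented default scale of ten."""
    return Decimal(("%%.%df" % decimal_return_scale) % value)

print(float_to_decimal(0.1))
```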
Note that the\nMySQL float types, which do include “scale”, will use “scale”\nas the default for decimal_return_scale, if not otherwise specified."},{"id":"text-243","heading":"Text","content":"class sqlalchemy.dialects.postgresql.ENUM(*enums, **kw)"},{"id":"text-244","heading":"Text","content":"Bases: sqlalchemy.types.NativeForEmulated, sqlalchemy.types.Enum\nPostgreSQL ENUM type.\nThis is a subclass of types.Enum    which includes\nsupport for PG’s CREATE TYPE    et DROP TYPE.\nWhen the builtin type types.Enum    is used and the\nEnum.native_enum    flag is left at its default of\nTrue, the PostgreSQL backend will use a postgresql.ENUM\ntype as the implementation, so the special create/drop rules\nwill be used.\nThe create/drop behavior of ENUM is necessarily intricate, due to the\nawkward relationship the ENUM type has in relationship to the\nparent table, in that it may be “owned” by just a single table, or\nmay be shared among many tables.\nWhen using types.Enum    ou postgresql.ENUM\nin an “inline” fashion, the CREATE TYPE    et DROP TYPE    is emitted\ncorresponding to when the Table.create()    et Table.drop()\nmethods are called:"},{"id":"text-245","heading":"Text","content":"table = Table(&#39;sometable&#39;, metadata,\n    Column(&#39;some_enum&#39;, ENUM(&#39;a&#39;, &#39;b&#39;, &#39;c&#39;, name=&#39;myenum&#39;))\n)"},{"id":"text-246","heading":"Text","content":"table.create(engine)  # will emit CREATE ENUM and CREATE TABLE\ntable.drop(engine)  # will emit DROP TABLE and DROP ENUM"},{"id":"text-247","heading":"Text","content":"To use a common enumerated type between multiple tables, the best\npractice is to declare the types.Enum    ou\npostgresql.ENUM    independently, and associate it with the\nMetaData    object itself:"},{"id":"text-248","heading":"Text","content":"my_enum = ENUM(&#39;a&#39;, &#39;b&#39;, &#39;c&#39;, name=&#39;myenum&#39;, metadata=metadata)"},{"id":"text-249","heading":"Text","content":"t1 = Table(&#39;sometable_one&#39;, 
metadata,\n    Column(&#39;some_enum&#39;, myenum)\n)"},{"id":"text-250","heading":"Text","content":"t2 = Table(&#39;sometable_two&#39;, metadata,\n    Column(&#39;some_enum&#39;, myenum)\n)"},{"id":"text-251","heading":"Text","content":"When this pattern is used, care must still be taken at the level\nof individual table creates.  Emitting CREATE TABLE without also\nspecifying checkfirst=True    will still cause issues:"},{"id":"text-252","heading":"Text","content":"t1.create(engine) # will fail: no such type &#39;myenum&#39;"},{"id":"text-253","heading":"Text","content":"If we specify checkfirst=True, the individual table-level create\noperation will check for the ENUM    and create if not exists:"},{"id":"text-254","heading":"Text","content":"# will check if enum exists, and emit CREATE TYPE if not\nt1.create(engine, checkfirst=True)"},{"id":"text-255","heading":"Text","content":"When using a metadata-level ENUM type, the type will always be created\nand dropped if either the metadata-wide create/drop is called:"},{"id":"text-256","heading":"Text","content":"metadata.create_all(engine)  # will emit CREATE TYPE\nmetadata.drop_all(engine)  # will emit DROP TYPE"},{"id":"text-257","heading":"Text","content":"The type can also be created and dropped directly:"},{"id":"text-258","heading":"Text","content":"my_enum.create(engine)\nmy_enum.drop(engine)"},{"id":"text-259","heading":"Text","content":"Changed in version 1.0.0: The PostgreSQL postgresql.ENUM    type\nnow behaves more strictly with regards to CREATE/DROP.  
A metadata-level\nENUM type will only be created and dropped at the metadata level,\nnot the table level, with the exception of\ntable.create(checkfirst=True).\nle table.drop()    call will now emit a DROP TYPE for a table-level\nenumerated type."},{"id":"text-260","heading":"Text","content":"__init__(*enums, **kw)"},{"id":"text-261","heading":"Text","content":"Construct an ENUM.\nArguments are the same as that of\ntypes.Enum, but also including\nthe following parameters."},{"id":"text-262","heading":"Text","content":"Paramètres"},{"id":"text-263","heading":"Text","content":"create_type – Defaults to True.\nIndicates that CREATE TYPE    should be\nemitted, after optionally checking for the\npresence of the type, when the parent\ntable is being created; and additionally\ncette DROP TYPE    is called when the table\nis dropped. Quand Faux, no check\nwill be performed and no CREATE TYPE\nou DROP TYPE    is emitted, unless\ncreate()\nou drop()\nare called directly.\nSetting to Faux    is helpful\nwhen invoking a creation scheme to a SQL file\nwithout access to the actual database &#8211;\nle create()    et\ndrop()    methods can\nbe used to emit SQL to a target bind."},{"id":"text-264","heading":"Text","content":"create(bind=None, checkfirst=True)"},{"id":"text-265","heading":"Text","content":"Émettre CREATE TYPE    for this\nENUM.\nIf the underlying dialect does not support\nPostgreSQL CREATE TYPE, no action is taken."},{"id":"text-266","heading":"Text","content":"Paramètres"},{"id":"text-267","heading":"Text","content":"bind – a connectable Moteur,\nConnection, or similar object to emit\nSQL."},{"id":"text-268","heading":"Text","content":"checkfirst – if True, a query against\nthe PG catalog will be first performed to see\nif the type does not exist already before\ncreating."},{"id":"text-269","heading":"Text","content":"drop(bind=None, checkfirst=True)"},{"id":"text-270","heading":"Text","content":"Émettre DROP TYPE    for this\nENUM.\nIf the underlying dialect does 
not support\nPostgreSQL DROP TYPE, no action is taken."},{"id":"text-271","heading":"Text","content":"Paramètres"},{"id":"text-272","heading":"Text","content":"bind – a connectable Moteur,\nConnection, or similar object to emit\nSQL."},{"id":"text-273","heading":"Text","content":"checkfirst – if True, a query against\nthe PG catalog will be first performed to see\nif the type actually exists before dropping."},{"id":"text-274","heading":"Text","content":"class sqlalchemy.dialects.postgresql.HSTORE(text_type=None)"},{"id":"text-275","heading":"Text","content":"Bases: sqlalchemy.types.Indexable, sqlalchemy.types.Concatenable, sqlalchemy.types.TypeEngine\nRepresent the PostgreSQL HSTORE type.\nle HSTORE    type stores dictionaries containing strings, e.g.:"},{"id":"text-276","heading":"Text","content":"data_table = Table(&#39;data_table&#39;, metadata,\n    Column(&#39;id&#39;, Integer, primary_key=True),\n    Column(&#39;data&#39;, HSTORE)\n)"},{"id":"text-277","heading":"Text","content":"avec engine.connect() comme conn:\n    conn.execute(\n        data_table.insérer(),\n        Les données = &quot;key1&quot;: &quot;value1&quot;, &quot;key2&quot;: &quot;value2&quot;\n    )"},{"id":"text-278","heading":"Text","content":"HSTORE    provides for a wide range of operations, including:"},{"id":"text-279","heading":"Text","content":"Index operations:"},{"id":"text-280","heading":"Text","content":"data_table.c.Les données[[[[&#39;some key&#39;] == &#39;some value&#39;"},{"id":"text-281","heading":"Text","content":"Containment operations:"},{"id":"text-282","heading":"Text","content":"data_table.c.Les données.has_key(&#39;some key&#39;)"},{"id":"text-283","heading":"Text","content":"data_table.c.Les données.has_all([[[[&#39;one&#39;, &#39;two&#39;, &#39;three&#39;])"},{"id":"text-284","heading":"Text","content":"Concatenation:"},{"id":"text-285","heading":"Text","content":"data_table.c.Les données + &quot;k1&quot;: 
&quot;v1&quot;"},{"id":"text-286","heading":"Text","content":"For a full list of special methods see\nHSTORE.comparator_factory.\nFor usage with the SQLAlchemy ORM, it may be desirable to combine\nthe usage of HSTORE    avec MutableDict    dictionary\nnow part of the sqlalchemy.ext.mutable\nextension.  This extension will allow “in-place” changes to the\ndictionary, e.g. addition of new keys or replacement/removal of existing\nkeys to/from the current dictionary, to produce events which will be\ndetected by the unit of work:"},{"id":"text-287","heading":"Text","content":"de sqlalchemy.ext.mutable importation MutableDict"},{"id":"text-288","heading":"Text","content":"class MyClass(Base):\n    __tablename__ = &#39;data_table&#39;"},{"id":"text-289","heading":"Text","content":"identifiant = Column(Integer, primary_key=True)\n    Les données = Column(MutableDict.as_mutable(HSTORE))"},{"id":"text-290","heading":"Text","content":"my_object = session.query(MyClass).un()"},{"id":"text-291","heading":"Text","content":"# in-place mutation, requires Mutable extension\n# in order for the ORM to detect\nmy_object.Les données[[[[&#39;some_key&#39;] = &#39;some value&#39;"},{"id":"text-292","heading":"Text","content":"session.commit()"},{"id":"text-293","heading":"Text","content":"When the sqlalchemy.ext.mutable    extension is not used, the ORM\nwill not be alerted to any changes to the contents of an existing\ndictionary, unless that dictionary value is re-assigned to the\nHSTORE-attribute itself, thus generating a change event."},{"id":"text-294","heading":"Text","content":"Voir également\nhstore    &#8211; render the PostgreSQL hstore()    une fonction."},{"id":"text-295","heading":"Text","content":"class Comparator(expr)"},{"id":"text-296","heading":"Text","content":"Bases: sqlalchemy.types.Comparator, sqlalchemy.types.Comparator\nDefine comparison operations for 
HSTORE."},{"id":"text-297","heading":"Text","content":"array()"},{"id":"text-298","heading":"Text","content":"Text array expression.  Returns array of alternating keys and\nvaleurs."},{"id":"text-299","heading":"Text","content":"contained_by(other)"},{"id":"text-300","heading":"Text","content":"Boolean expression.  Test if keys are a proper subset of the\nkeys of the argument jsonb expression."},{"id":"text-301","heading":"Text","content":"contient(other, **kwargs)"},{"id":"text-302","heading":"Text","content":"Boolean expression.  Test if keys (or array) are a superset\nof/contained the keys of the argument jsonb expression."},{"id":"text-303","heading":"Text","content":"defined(clé)"},{"id":"text-304","heading":"Text","content":"Boolean expression.  Test for presence of a non-NULL value for\nthe key.  Note that the key may be a SQLA expression."},{"id":"text-305","heading":"Text","content":"effacer(clé)"},{"id":"text-306","heading":"Text","content":"HStore expression.  Returns the contents of this hstore with the\ngiven key deleted.  Note that the key may be a SQLA expression."},{"id":"text-307","heading":"Text","content":"has_all(other)"},{"id":"text-308","heading":"Text","content":"Boolean expression.  Test for presence of all keys in jsonb"},{"id":"text-309","heading":"Text","content":"has_any(other)"},{"id":"text-310","heading":"Text","content":"Boolean expression.  Test for presence of any key in jsonb"},{"id":"text-311","heading":"Text","content":"has_key(other)"},{"id":"text-312","heading":"Text","content":"Boolean expression.  Test for presence of a key.  Note that the\nkey may be a SQLA expression."},{"id":"text-313","heading":"Text","content":"keys()"},{"id":"text-314","heading":"Text","content":"Text array expression.  Returns array of keys."},{"id":"text-315","heading":"Text","content":"matrix()"},{"id":"text-316","heading":"Text","content":"Text array expression.  
Returns array of [key, value] pairs."},{"id":"text-317","heading":"Text","content":"slice(array)"},{"id":"text-318","heading":"Text","content":"HStore expression.  Returns a subset of an hstore defined by\narray of keys."},{"id":"text-319","heading":"Text","content":"vals()"},{"id":"text-320","heading":"Text","content":"Text array expression.  Returns array of values."},{"id":"text-321","heading":"Text","content":"__init__(text_type=None)"},{"id":"text-322","heading":"Text","content":"Construct a new HSTORE."},{"id":"text-323","heading":"Text","content":"Paramètres"},{"id":"text-324","heading":"Text","content":"text_type &#8211; \nthe type that should be used for indexed values.\nDefaults to types.Text."},{"id":"text-325","heading":"Text","content":"bind_processor(dialect)"},{"id":"text-326","heading":"Text","content":"Return a conversion function for processing bind values.\nReturns a callable which will receive a bind parameter value\nas the sole positional argument and will return a value to\nsend to the DB-API.\nIf processing is not necessary, the method should return None."},{"id":"text-327","heading":"Text","content":"Paramètres"},{"id":"text-328","heading":"Text","content":"dialect – Dialect instance in use."},{"id":"text-329","heading":"Text","content":"comparator_factory"},{"id":"text-330","heading":"Text","content":"alias of HSTORE.Comparator"},{"id":"text-331","heading":"Text","content":"result_processor(dialect, coltype)"},{"id":"text-332","heading":"Text","content":"Return a conversion function for processing result row values.\nReturns a callable which will receive a result row column\nvalue as the sole positional argument and will return a value\nto return to the user.\nIf processing is not necessary, the method should return None."},{"id":"text-333","heading":"Text","content":"Paramètres"},{"id":"text-334","heading":"Text","content":"class sqlalchemy.dialects.postgresql.hstore(*args, **kwargs)"},{"id":"text-335","heading":"Text","content":"Bases: 
sqlalchemy.sql.functions.GenericFunction\nConstruct an hstore value within a SQL expression using the\nPostgreSQL hstore()    une fonction.\nle hstore    function accepts one or two arguments as described\nin the PostgreSQL documentation.\nE.g.:"},{"id":"text-336","heading":"Text","content":"de sqlalchemy.dialects.postgresql importation array, hstore"},{"id":"text-337","heading":"Text","content":"sélectionner([[[[hstore(&#39;key1&#39;, &#39;value1&#39;)])"},{"id":"text-338","heading":"Text","content":"sélectionner([[[[\n        hstore(\n            array([[[[&#39;key1&#39;, &#39;key2&#39;, &#39;key3&#39;]),\n            array([[[[&#39;value1&#39;, &#39;value2&#39;, &#39;value3&#39;])\n        )\n    ])"},{"id":"text-339","heading":"Text","content":"Voir également\nHSTORE    &#8211; the PostgreSQL HSTORE    datatype."},{"id":"text-340","heading":"Text","content":"type"},{"id":"text-341","heading":"Text","content":"alias of HSTORE"},{"id":"text-342","heading":"Text","content":"class sqlalchemy.dialects.postgresql.INET"},{"id":"text-343","heading":"Text","content":"Bases: sqlalchemy.types.TypeEngine"},{"id":"text-344","heading":"Text","content":"class sqlalchemy.dialects.postgresql.INTERVAL(precision=None, fields=None)"},{"id":"text-345","heading":"Text","content":"Bases: sqlalchemy.types.NativeForEmulated, sqlalchemy.types._AbstractInterval\nPostgreSQL INTERVAL type.\nThe INTERVAL type may not be supported on all DBAPIs.\nIt is known to work on psycopg2 and not pg8000 or zxjdbc."},{"id":"text-346","heading":"Text","content":"__init__(precision=None, fields=None)"},{"id":"text-347","heading":"Text","content":"Construct an INTERVAL."},{"id":"text-348","heading":"Text","content":"Paramètres"},{"id":"text-349","heading":"Text","content":"précision – optional integer precision value"},{"id":"text-350","heading":"Text","content":"fields &#8211; \nstring fields specifier.  
allows storage of fields\nto be limited, such as &quot;YEAR&quot;, &quot;MONTH&quot;, &quot;DAY TO HOUR&quot;,\netc."},{"id":"text-351","heading":"Text","content":"class sqlalchemy.dialects.postgresql.JSON(none_as_null=False, astext_type=None)"},{"id":"text-352","heading":"Text","content":"Bases: sqlalchemy.types.JSON\nRepresent the PostgreSQL JSON type.\nThis type is a specialization of the Core-level types.JSON\ntype.   Be sure to read the documentation for types.JSON    for\nimportant tips regarding treatment of NULL values and ORM use.\nThe operators provided by the PostgreSQL version of JSON\ninclude:"},{"id":"text-353","heading":"Text","content":"Index operations (the -&gt;    operator):"},{"id":"text-354","heading":"Text","content":"data_table.c.Les données[[[[&#39;some key&#39;]"},{"id":"text-355","heading":"Text","content":"data_table.c.Les données[[[[5]"},{"id":"text-356","heading":"Text","content":"Index operations returning text (the -&gt;&gt;    operator):"},{"id":"text-357","heading":"Text","content":"data_table.c.Les données[[[[&#39;some key&#39;].astext == &#39;some value&#39;"},{"id":"text-358","heading":"Text","content":"Index operations with CAST\n(equivalent to CAST(col -&gt;&gt; [&#39;some[&#39;some['some['some key&#39;] AS )):"},{"id":"text-359","heading":"Text","content":"data_table.c.Les données[[[[&#39;some key&#39;].astext.jeter(Integer) == 5"},{"id":"text-360","heading":"Text","content":"Path index operations (the #&gt;    operator):"},{"id":"text-361","heading":"Text","content":"data_table.c.Les données[([([([(&#39;key_1&#39;, &#39;key_2&#39;, 5, ..., &#39;key_n&#39;)]"},{"id":"text-362","heading":"Text","content":"Path index operations returning text (the #&gt;&gt;    operator):"},{"id":"text-363","heading":"Text","content":"data_table.c.Les données[([([([(&#39;key_1&#39;, &#39;key_2&#39;, 5, ..., &#39;key_n&#39;)].astext == &#39;some value&#39;"},{"id":"text-364","heading":"Text","content":"Changed in version 1.1: le 
ColumnElement.cast()    operator on\nJSON objects now requires that the JSON.Comparator.astext\nmodifier be called explicitly, if the cast works only from a textual\nstring."},{"id":"text-365","heading":"Text","content":"Index operations return an expression object whose type defaults to\nJSON    by default, so that further JSON-oriented instructions\nmay be called upon the result type.\nCustom serializers and deserializers are specified at the dialect level,\nthat is using create_engine(). The reason for this is that when\nusing psycopg2, the DBAPI only allows serializers at the per-cursor\nor per-connection level.   E.g.:"},{"id":"text-366","heading":"Text","content":"engine = create_engine(&quot;postgresql://scott:tiger@localhost/test&quot;,\n                        json_serializer=my_serialize_fn,\n                        json_deserializer=my_deserialize_fn\n                )"},{"id":"text-367","heading":"Text","content":"When using the psycopg2 dialect, the json_deserializer is registered\nagainst the database using psycopg2.extras.register_default_json."},{"id":"text-368","heading":"Text","content":"class Comparator(expr)"},{"id":"text-369","heading":"Text","content":"Bases: sqlalchemy.types.Comparator\nDefine comparison operations for JSON."},{"id":"text-370","heading":"Text","content":"property astext"},{"id":"text-371","heading":"Text","content":"On an indexed expression, use the “astext” (e.g. “-&gt;&gt;”)\nconversion when rendered in SQL.\nE.g.:"},{"id":"text-372","heading":"Text","content":"sélectionner([[[[data_table.c.Les données[[[[&#39;some key&#39;].astext])"},{"id":"text-373","heading":"Text","content":"__init__(none_as_null=False, astext_type=None)"},{"id":"text-374","heading":"Text","content":"Construct a JSON    type."},{"id":"text-375","heading":"Text","content":"Paramètres"},{"id":"text-376","heading":"Text","content":"none_as_null &#8211; \nif True, persist the value None    as a\nSQL NULL value, not the JSON encoding of nul. 
Note that\nwhen this flag is False, the null()    construct can still\nbe used to persist a NULL value:"},{"id":"text-377","heading":"Text","content":"de sqlalchemy importation nul\nconn.execute(table.insérer(), Les données=nul())"},{"id":"text-378","heading":"Text","content":"Changed in version 0.9.8: &#8211; Added none_as_null, et null()\nis now supported in order to persist a NULL value."},{"id":"text-379","heading":"Text","content":"astext_type &#8211; \nthe type to use for the\nJSON.Comparator.astext\naccessor on indexed attributes.  Defaults to types.Text."},{"id":"text-380","heading":"Text","content":"comparator_factory"},{"id":"text-381","heading":"Text","content":"alias of JSON.Comparator"},{"id":"text-382","heading":"Text","content":"class sqlalchemy.dialects.postgresql.JSONB(none_as_null=False, astext_type=None)"},{"id":"text-383","heading":"Text","content":"Bases: sqlalchemy.dialects.postgresql.json.JSON\nRepresent the PostgreSQL JSONB type.\nle JSONB    type stores arbitrary JSONB format data, e.g.:"},{"id":"text-384","heading":"Text","content":"data_table = Table(&#39;data_table&#39;, metadata,\n    Column(&#39;id&#39;, Integer, primary_key=True),\n    Column(&#39;data&#39;, JSONB)\n)"},{"id":"text-385","heading":"Text","content":"avec engine.connect() comme conn:\n    conn.execute(\n        data_table.insérer(),\n        Les données = &quot;key1&quot;: &quot;value1&quot;, &quot;key2&quot;: &quot;value2&quot;\n    )"},{"id":"text-386","heading":"Text","content":"le JSONB    type includes all operations provided by\nJSON, including the same behaviors for indexing operations.\nIt also adds additional operators specific to JSONB, including\nJSONB.Comparator.has_key(), JSONB.Comparator.has_all(),\nJSONB.Comparator.has_any(), JSONB.Comparator.contains(),\net JSONB.Comparator.contained_by().\nComme le JSON    type, the JSONB    type does not detect\nin-place changes when used with the ORM, unless the\nsqlalchemy.ext.mutable    extension is used.\nCustom 
serializers and deserializers\nare shared with the JSON    class, using the json_serializer\net json_deserializer    keyword arguments.  These must be specified\nat the dialect level using create_engine(). When using\npsycopg2, the serializers are associated with the jsonb type using\npsycopg2.extras.register_default_jsonb    on a per-connection basis,\nin the same way that psycopg2.extras.register_default_json    is used\nto register these handlers with the json type."},{"id":"text-387","heading":"Text","content":"class Comparator(expr)"},{"id":"text-388","heading":"Text","content":"Bases: sqlalchemy.dialects.postgresql.json.Comparator\nDefine comparison operations for JSON."},{"id":"text-389","heading":"Text","content":"contained_by(other)"},{"id":"text-390","heading":"Text","content":"Boolean expression.  Test if keys are a proper subset of the\nkeys of the argument jsonb expression."},{"id":"text-391","heading":"Text","content":"contient(other, **kwargs)"},{"id":"text-392","heading":"Text","content":"Boolean expression.  Test if keys (or array) are a superset\nof/contained the keys of the argument jsonb expression."},{"id":"text-393","heading":"Text","content":"has_all(other)"},{"id":"text-394","heading":"Text","content":"Boolean expression.  Test for presence of all keys in jsonb"},{"id":"text-395","heading":"Text","content":"has_any(other)"},{"id":"text-396","heading":"Text","content":"Boolean expression.  Test for presence of any key in jsonb"},{"id":"text-397","heading":"Text","content":"has_key(other)"},{"id":"text-398","heading":"Text","content":"Boolean expression.  Test for presence of a key.  
Note that the\nkey may be a SQLA expression."},{"id":"text-399","heading":"Text","content":"comparator_factory"},{"id":"text-400","heading":"Text","content":"alias of JSONB.Comparator"},{"id":"text-401","heading":"Text","content":"class sqlalchemy.dialects.postgresql.MACADDR"},{"id":"text-402","heading":"Text","content":"Bases: sqlalchemy.types.TypeEngine"},{"id":"text-403","heading":"Text","content":"class sqlalchemy.dialects.postgresql.ARGENT"},{"id":"text-404","heading":"Text","content":"Bases: sqlalchemy.types.TypeEngine\nProvide the PostgreSQL MONEY type."},{"id":"text-405","heading":"Text","content":"class sqlalchemy.dialects.postgresql.OID"},{"id":"text-406","heading":"Text","content":"Bases: sqlalchemy.types.TypeEngine\nProvide the PostgreSQL OID type."},{"id":"text-407","heading":"Text","content":"class sqlalchemy.dialects.postgresql.REAL(precision=None, asdecimal=False, decimal_return_scale=None)"},{"id":"text-408","heading":"Text","content":"Bases: sqlalchemy.types.Float\nThe SQL REAL type."},{"id":"text-409","heading":"Text","content":"__init__(precision=None, asdecimal=False, decimal_return_scale=None)"},{"id":"text-410","heading":"Text","content":"Construct a Float."},{"id":"text-411","heading":"Text","content":"Paramètres"},{"id":"text-412","heading":"Text","content":"précision – the numeric precision for use in DDL CREATE\nTABLE."},{"id":"text-413","heading":"Text","content":"asdecimal – the same flag as that of Numeric, but\ndefaults to Faux. Note that setting this flag to True\nresults in floating point conversion."},{"id":"text-414","heading":"Text","content":"decimal_return_scale &#8211; \nDefault scale to use when converting\nfrom floats to Python decimals.  Floating point values will typically\nbe much longer due to decimal inaccuracy, and most floating point\ndatabase types don’t have a notion of “scale”, so by default the\nfloat type looks for the first ten decimal places when converting.\nSpecifying this value will override that length.  
Note that the\nMySQL float types, which do include “scale”, will use “scale”\nas the default for decimal_return_scale, if not otherwise specified."},{"id":"text-415","heading":"Text","content":"class sqlalchemy.dialects.postgresql.REGCLASS"},{"id":"text-416","heading":"Text","content":"Bases: sqlalchemy.types.TypeEngine\nProvide the PostgreSQL REGCLASS type."},{"id":"text-417","heading":"Text","content":"class sqlalchemy.dialects.postgresql.TSVECTOR"},{"id":"text-418","heading":"Text","content":"Bases: sqlalchemy.types.TypeEngine\nle postgresql.TSVECTOR    type implements the PostgreSQL\ntext search type TSVECTOR.\nIt can be used to do full text queries on natural language\ndocuments."},{"id":"text-419","heading":"Text","content":"class sqlalchemy.dialects.postgresql.UUID(as_uuid=False)"},{"id":"text-420","heading":"Text","content":"Bases: sqlalchemy.types.TypeEngine\nPostgreSQL UUID type.\nRepresents the UUID column type, interpreting\ndata either as natively returned by the DBAPI\nor as Python uuid objects.\nThe UUID type may not be supported on all DBAPIs.\nIt is known to work on psycopg2 and not pg8000."},{"id":"text-421","heading":"Text","content":"__init__(as_uuid=False)"},{"id":"text-422","heading":"Text","content":"Construct a UUID type."},{"id":"text-423","heading":"Text","content":"Paramètres"},{"id":"text-424","heading":"Text","content":"as_uuid=False – if True, values will be interpreted\nas Python uuid objects, converting to/from string via the\nDBAPI."},{"id":"text-425","heading":"Text","content":"Range Types\nThe new range column types found in PostgreSQL 9.2 onwards are\ncatered for by the following types:"},{"id":"text-426","heading":"Text","content":"class sqlalchemy.dialects.postgresql.INT4RANGE"},{"id":"text-427","heading":"Text","content":"Bases: sqlalchemy.dialects.postgresql.ranges.RangeOperators, sqlalchemy.types.TypeEngine\nRepresent the PostgreSQL INT4RANGE type."},{"id":"text-428","heading":"Text","content":"class 
sqlalchemy.dialects.postgresql.INT8RANGE"},{"id":"text-429","heading":"Text","content":"Bases: sqlalchemy.dialects.postgresql.ranges.RangeOperators, sqlalchemy.types.TypeEngine\nRepresent the PostgreSQL INT8RANGE type."},{"id":"text-430","heading":"Text","content":"class sqlalchemy.dialects.postgresql.NUMRANGE"},{"id":"text-431","heading":"Text","content":"Bases: sqlalchemy.dialects.postgresql.ranges.RangeOperators, sqlalchemy.types.TypeEngine\nRepresent the PostgreSQL NUMRANGE type."},{"id":"text-432","heading":"Text","content":"class sqlalchemy.dialects.postgresql.DATERANGE"},{"id":"text-433","heading":"Text","content":"Bases: sqlalchemy.dialects.postgresql.ranges.RangeOperators, sqlalchemy.types.TypeEngine\nRepresent the PostgreSQL DATERANGE type."},{"id":"text-434","heading":"Text","content":"class sqlalchemy.dialects.postgresql.TSRANGE"},{"id":"text-435","heading":"Text","content":"Bases: sqlalchemy.dialects.postgresql.ranges.RangeOperators, sqlalchemy.types.TypeEngine\nRepresent the PostgreSQL TSRANGE type."},{"id":"text-436","heading":"Text","content":"class sqlalchemy.dialects.postgresql.TSTZRANGE"},{"id":"text-437","heading":"Text","content":"Bases: sqlalchemy.dialects.postgresql.ranges.RangeOperators, sqlalchemy.types.TypeEngine\nRepresent the PostgreSQL TSTZRANGE type."},{"id":"text-438","heading":"Text","content":"The types above get most of their functionality from the following\nmixin:"},{"id":"text-439","heading":"Text","content":"class sqlalchemy.dialects.postgresql.ranges.RangeOperators"},{"id":"text-440","heading":"Text","content":"This mixin provides functionality for the Range Operators\nlisted in Table 9-44 of the postgres documentation for Range\nFunctions and Operators. It is used by all the range types\nprovided in the postgres    dialect and can likely be used for\nany range types you create yourself.\nNo extra support is provided for the Range Functions listed in\nTable 9-45 of the postgres documentation. 
For these, the normal\nfunc()    object should be used."},{"id":"text-441","heading":"Text","content":"class comparator_factory(expr)"},{"id":"text-442","heading":"Text","content":"Bases: sqlalchemy.types.Comparator\nDefine comparison operations for range types."},{"id":"text-443","heading":"Text","content":"__ne__(other)"},{"id":"text-444","heading":"Text","content":"Boolean expression. Returns true if two ranges are not equal"},{"id":"text-445","heading":"Text","content":"adjacent_to(other)"},{"id":"text-446","heading":"Text","content":"Boolean expression. Returns true if the range in the column\nis adjacent to the range in the operand."},{"id":"text-447","heading":"Text","content":"contained_by(other)"},{"id":"text-448","heading":"Text","content":"Boolean expression. Returns true if the column is contained\nwithin the right hand operand."},{"id":"text-449","heading":"Text","content":"contient(other, **kw)"},{"id":"text-450","heading":"Text","content":"Boolean expression. Returns true if the right hand operand,\nwhich can be an element or a range, is contained within the\ncolumn."},{"id":"text-451","heading":"Text","content":"not_extend_left_of(other)"},{"id":"text-452","heading":"Text","content":"Boolean expression. Returns true if the range in the column\ndoes not extend left of the range in the operand."},{"id":"text-453","heading":"Text","content":"not_extend_right_of(other)"},{"id":"text-454","heading":"Text","content":"Boolean expression. Returns true if the range in the column\ndoes not extend right of the range in the operand."},{"id":"text-455","heading":"Text","content":"overlaps(other)"},{"id":"text-456","heading":"Text","content":"Boolean expression. Returns true if the column overlaps\n(has points in common with) the right hand operand."},{"id":"text-457","heading":"Text","content":"strictly_left_of(other)"},{"id":"text-458","heading":"Text","content":"Boolean expression. 
Returns true if the column is strictly left of the right hand operand.

strictly_right_of(other)

Boolean expression. Returns true if the column is strictly right of the right hand operand.

Warning
The range type DDL support should work with any PostgreSQL DBAPI driver, however the data types returned may vary. If you are using psycopg2, it is recommended to upgrade to version 2.5 or later before using these column types.

When instantiating models that use these column types, you should pass whatever data type is expected by the DBAPI driver you're using for the column type. For psycopg2 these are psycopg2.extras.NumericRange, psycopg2.extras.DateRange, psycopg2.extras.DateTimeRange and psycopg2.extras.DateTimeTZRange, or the class you've registered with psycopg2.extras.register_range.

For example:

    from datetime import datetime

    from psycopg2.extras import DateTimeRange
    from sqlalchemy.dialects.postgresql import TSRANGE

    class RoomBooking(Base):

        __tablename__ = 'room_booking'

        room = Column(Integer(), primary_key=True)
        during = Column(TSRANGE())

    booking = RoomBooking(
        room=101,
        during=DateTimeRange(datetime(2013, 3, 23), None)
    )

PostgreSQL Constraint Types

SQLAlchemy supports PostgreSQL EXCLUDE constraints via the ExcludeConstraint class:

class sqlalchemy.dialects.postgresql.ExcludeConstraint(*elements, **kw)

Bases: sqlalchemy.schema.ColumnCollectionConstraint

A table-level EXCLUDE constraint. Defines an EXCLUDE constraint as described in the PostgreSQL documentation.

__init__(*elements, **kw)

Create an ExcludeConstraint object. E.g.:

    const = ExcludeConstraint(
        (Column('period'), '&&'),
        (Column('group'), '='),
        where=(Column('group') != 'some group')
    )

The constraint is normally embedded into the Table construct directly, or added later using append_constraint():

    some_table = Table(
        'some_table', metadata,
        Column('id', Integer, primary_key=True),
        Column('period', TSRANGE()),
        Column('group', String)
    )

    some_table.append_constraint(
        ExcludeConstraint(
            (some_table.c.period, '&&'),
            (some_table.c.group, '='),
            where=some_table.c.group != 'some group',
            name='some_table_excl_const'
        )
    )

Parameters

* elements – A sequence of two-tuples of the form (column, operator) where "column" is a SQL expression element or a raw SQL string, most typically a Column object, and "operator" is a string containing the operator to use. In order to specify a column name when a Column object is not available, while ensuring that any necessary quoting rules take effect, an ad-hoc Column or sql.expression.column() object should be used.

* name – Optional, the in-database name of this constraint.

* deferrable – Optional bool. If set, emit DEFERRABLE or NOT DEFERRABLE when issuing DDL for this constraint.

* initially – Optional string. If set, emit INITIALLY <value> when issuing DDL for this constraint.

* using – Optional string. If set, emit USING <index_method> when issuing DDL for this constraint. Defaults to 'gist'.

* where – Optional SQL expression construct or literal SQL string. If set, emit WHERE <predicate> when issuing DDL for this constraint.

Warning
The ExcludeConstraint.where argument to ExcludeConstraint can be passed as a Python string argument, which will be treated as trusted SQL text and rendered as given. DO NOT PASS UNTRUSTED INPUT TO THIS PARAMETER.

For example:

    from sqlalchemy.dialects.postgresql import ExcludeConstraint, TSRANGE

    class RoomBooking(Base):

        __tablename__ = 'room_booking'

        room = Column(Integer(), primary_key=True)
        during = Column(TSRANGE())

        __table_args__ = (
            ExcludeConstraint(('room', '='), ('during', '&&')),
        )

PostgreSQL DML Constructs

sqlalchemy.dialects.postgresql.dml.insert(table, values=None, inline=False, bind=None, prefixes=None, returning=None, return_defaults=False, **dialect_kw)

Construct a new Insert object. This constructor is mirrored as a public API function; see insert() for a full usage and argument description.

class sqlalchemy.dialects.postgresql.dml.Insert(table, values=None, inline=False, bind=None, prefixes=None, returning=None, return_defaults=False, **dialect_kw)

Bases: sqlalchemy.sql.expression.Insert

PostgreSQL-specific implementation of INSERT. Adds methods for PG-specific syntaxes such as ON CONFLICT.

excluded

Provide the excluded namespace for an ON CONFLICT statement. PG's ON CONFLICT clause allows reference to the row that would be inserted, known as excluded. This attribute provides all columns in this row to be referenceable.

on_conflict_do_nothing(constraint=None, index_elements=None, index_where=None)

Specifies a DO NOTHING action for ON CONFLICT clause. The constraint and index_elements arguments are optional, but only one of these can be specified.

Parameters

* constraint – The name of a unique or exclusion constraint on the table, or the constraint object itself if it has a .name attribute.

* index_elements – A sequence consisting of string column names, Column objects, or other column expression objects that will be used to infer a target index.

* index_where – Additional WHERE criterion that can be used to infer a conditional target index.

on_conflict_do_update(constraint=None, index_elements=None, index_where=None, set_=None, where=None)

Specifies a DO UPDATE SET action for ON CONFLICT clause. Either the constraint or index_elements argument is required, but only one of these can be specified.

Parameters

* constraint – The name of a unique or exclusion constraint on the table, or the constraint object itself if it has a .name attribute.

* index_elements – A sequence consisting of string column names, Column objects, or other column expression objects that will be used to infer a target index.

* index_where – Additional WHERE criterion that can be used to infer a conditional target index.

* set_ – Required argument. A dictionary or other mapping object with column names as keys and expressions or literals as values, specifying the SET actions to take. If the target Column specifies a ".key" attribute distinct from the column name, that key should be used.

  Warning
  This dictionary does not take into account Python-specified default UPDATE values or generation functions, e.g. those specified using Column.onupdate. These values will not be exercised for an ON CONFLICT style of UPDATE, unless they are manually specified in the Insert.on_conflict_do_update.set_ dictionary.

* where – Optional argument. If present, can be a literal SQL string or an acceptable expression for a WHERE clause that restricts the rows affected by DO UPDATE SET. Rows not meeting the WHERE condition will not be updated (effectively a DO NOTHING for those rows).
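As a compile-only sketch of how these methods render, assuming a hypothetical users table (no database connection is needed to inspect the generated SQL):

```python
from sqlalchemy import Column, Integer, MetaData, String, Table
from sqlalchemy.dialects import postgresql
from sqlalchemy.dialects.postgresql import insert

metadata = MetaData()
# Hypothetical table, for illustration only.
users = Table(
    "users", metadata,
    Column("id", Integer, primary_key=True),
    Column("data", String),
)

stmt = insert(users).values(id=1, data="alice")
# On a conflicting id, overwrite "data" with the value from the row that
# would have been inserted, referenced via the "excluded" namespace.
upsert = stmt.on_conflict_do_update(
    index_elements=["id"],
    set_={"data": stmt.excluded.data},
)
sql = str(upsert.compile(dialect=postgresql.dialect()))
# INSERT INTO users (id, data) VALUES (%(id)s, %(data)s)
#     ON CONFLICT (id) DO UPDATE SET data = excluded.data
```

The same construct with on_conflict_do_nothing(index_elements=["id"]) would render ON CONFLICT (id) DO NOTHING instead.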
psycopg2

Support for the PostgreSQL database via the psycopg2 driver.

DBAPI

Documentation and download information (if applicable) for psycopg2 is available at: http://pypi.python.org/pypi/psycopg2/

Connecting

Connect String:

    postgresql+psycopg2://user:password@host:port/dbname[?key=value&key=value...]

psycopg2 Connect Arguments

psycopg2-specific keyword arguments which are accepted by create_engine() are:

* server_side_cursors: Enable the usage of "server side cursors" for SQL statements which support this feature. What this essentially means from a psycopg2 point of view is that the cursor is created using a name, e.g. connection.cursor('some name'), which has the effect that result rows are not immediately pre-fetched and buffered after statement execution, but are instead left on the server and only retrieved as needed. SQLAlchemy's ResultProxy uses special row-buffering behavior when this feature is enabled, such that groups of 100 rows at a time are fetched over the wire to reduce conversational overhead. Note that the Connection.execution_options.stream_results execution option is a more targeted way of enabling this mode on a per-execution basis.

* use_native_unicode: Enable the usage of Psycopg2 "native unicode" mode per connection. True by default.

* isolation_level: This option, available for all PostgreSQL dialects, includes the AUTOCOMMIT isolation level when using the psycopg2 dialect.

* client_encoding: sets the client encoding in a libpq-agnostic way, using psycopg2's set_client_encoding() method.

* executemany_mode, executemany_batch_page_size, executemany_values_page_size: Allow use of psycopg2 extensions for optimizing "executemany"-style queries. See the referenced section below for details.

* use_batch_mode: this is the previous setting used to affect "executemany" mode and is now deprecated.

Unix Domain Connections

psycopg2 supports connecting via Unix domain connections. When the host portion of the URL is omitted, SQLAlchemy passes None to psycopg2, which specifies Unix-domain communication rather than TCP/IP communication:

    create_engine("postgresql+psycopg2://user:password@/dbname")

By default, the socket file used is to connect to a Unix-domain socket in /tmp, or whatever socket directory was specified when PostgreSQL was built. This value can be overridden by passing a pathname to psycopg2, using host as an additional keyword argument:

    create_engine("postgresql+psycopg2://user:password@/dbname?host=/var/lib/postgresql")

Empty DSN Connections / Environment Variable Connections

The psycopg2 DBAPI can connect to PostgreSQL by passing an empty DSN to the libpq client library, which by default indicates to connect to a localhost PostgreSQL database that is open for "trust" connections. This behavior can be further tailored using a particular set of environment variables which are prefixed with PG_..., which are consumed by libpq to take the place of any or all elements of the connection string. For this form, the URL can be passed without any elements other than the initial scheme:

    engine = create_engine('postgresql+psycopg2://')

In the above form, a blank "dsn" string is passed to the psycopg2.connect() function which in turn represents an empty DSN passed to libpq.

New in version 1.3.2: support for parameter-less connections with psycopg2.

See also

Environment Variables – PostgreSQL documentation on how to use PG_... environment variables for connections.

Per-Statement/Connection Execution Options

The following DBAPI-specific options are respected when used with Connection.execution_options(), Executable.execution_options(), Query.execution_options(), in addition to those not specific to DBAPIs:

* isolation_level – Set the transaction isolation level for the lifespan of a Connection (can only be set on a connection, not a statement or query). See Psycopg2 Transaction Isolation Level.

* stream_results – Enable or disable usage of psycopg2 server side cursors – this feature makes use of "named" cursors in combination with special result handling methods so that result rows are not fully buffered. If None or not set, the server_side_cursors option of the Engine is used.

* max_row_buffer – when using stream_results, an integer value that specifies the maximum number of rows to buffer at a time. This is interpreted by the BufferedRowResultProxy, and if omitted the buffer will grow to ultimately store 1000 rows at a time.

Psycopg2 Fast Execution Helpers

Modern versions of psycopg2 include a feature known as Fast Execution Helpers, which have been shown in benchmarking to improve psycopg2's executemany() performance, primarily with INSERT statements, by multiple orders of magnitude. SQLAlchemy allows this extension to be used for all executemany() style calls invoked by an Engine when used with multiple parameter sets, which includes the use of this feature both by the Core as well as by the ORM for inserts of objects with non-autogenerated primary key values, by adding the executemany_mode flag to create_engine():

    engine = create_engine(
        "postgresql+psycopg2://scott:tiger@host/dbname",
        executemany_mode='batch')

Changed in version 1.3.7: the use_batch_mode flag has been superseded by a new parameter executemany_mode which provides support both for psycopg2's execute_batch helper as well as the execute_values helper.

Possible options for executemany_mode include:

* None – By default, psycopg2's extensions are not used, and the usual cursor.executemany() method is used when invoking batches of statements.

* 'batch' – Uses psycopg2.extras.execute_batch so that multiple copies of a SQL query, each one corresponding to a parameter set passed to executemany(), are joined into a single SQL string separated by a semicolon. This is the same behavior as was provided by the use_batch_mode=True flag.

* 'values' – For Core insert() constructs only (including those emitted by the ORM automatically), the psycopg2.extras.execute_values extension is used so that multiple parameter sets are grouped into a single INSERT statement and joined together with multiple VALUES expressions. This method requires that the string text of the VALUES clause inside the INSERT statement is manipulated, so is only supported with a compiled insert() construct where the format is predictable. For all other constructs, including plain textual INSERT statements not rendered by the SQLAlchemy expression language compiler, the psycopg2.extras.execute_batch method is used. It is therefore important to note that "values" mode implies that "batch" mode is also used for all statements for which "values" mode does not apply.

For both strategies, the executemany_batch_page_size and executemany_values_page_size arguments control how many parameter sets should be represented in each execution. Because "values" mode implies a fallback down to "batch" mode for non-INSERT statements, there are two independent page size arguments. For each, the default value of None means to use psycopg2's defaults, which at the time of this writing are quite low at 100.
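Roughly, the difference between the two modes can be pictured with plain string manipulation. This is a schematic illustration only, not psycopg2's actual implementation; the real helpers handle quoting, escaping and paging:

```python
# Three parameter sets, as would be passed to executemany().
params = [(1, "a"), (2, "b"), (3, "c")]

# 'batch' mode: the statement is repeated once per parameter set and the
# copies are sent to the server as a single semicolon-joined string.
batch_sql = "; ".join(
    "INSERT INTO t (id, x) VALUES (%d, '%s')" % p for p in params
)

# 'values' mode: one INSERT whose VALUES clause carries every group.
values_sql = "INSERT INTO t (id, x) VALUES " + ", ".join(
    "(%d, '%s')" % p for p in params
)

print(batch_sql.count("INSERT INTO"))   # 3 statements in one round trip
print(values_sql.count("INSERT INTO"))  # 1 statement for all rows
```

This is why the two page sizes are tuned differently: in 'batch' mode each page entry is a full statement, whereas in 'values' mode each entry is only one VALUES group.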
For the execute_values method, a number as high as 10000 may prove to be performant, whereas for execute_batch, as the number represents full statements repeated, a number closer to the default of 100 is likely more appropriate:

    engine = create_engine(
        "postgresql+psycopg2://scott:tiger@host/dbname",
        executemany_mode='values',
        executemany_values_page_size=10000, executemany_batch_page_size=500)

Changed in version 1.3.7: Added support for psycopg2.extras.execute_values. The use_batch_mode flag is superseded by the executemany_mode flag.

Unicode with Psycopg2

By default, the psycopg2 driver uses the psycopg2.extensions.UNICODE extension, such that the DBAPI receives and returns all strings as Python Unicode objects directly – SQLAlchemy passes these values through without change. Psycopg2 here will encode/decode string values based on the current "client encoding" setting; by default this is the value in the postgresql.conf file, which often defaults to SQL_ASCII. Typically, this can be changed to utf8, as a more useful default:

    # postgresql.conf file

    # client_encoding = sql_ascii # actually, defaults to database
                                  # encoding
    client_encoding = utf8

A second way to affect the client encoding is to set it within Psycopg2 locally. SQLAlchemy will call psycopg2's connection.set_client_encoding() method on all new connections based on the value passed to create_engine() using the client_encoding parameter:

    # set_client_encoding() setting;
    # works for *all* PostgreSQL versions
    engine = create_engine("postgresql://user:pass@host/dbname",
                           client_encoding='utf8')

This overrides the encoding specified in the PostgreSQL client configuration. When using the parameter in this way, the psycopg2 driver emits SET client_encoding TO 'utf8' on the connection explicitly, and works in all PostgreSQL versions.

Note that the client_encoding setting as passed to create_engine() is not the same as the more recently added client_encoding parameter now supported by libpq directly. This is enabled when client_encoding is passed directly to psycopg2.connect(), and from SQLAlchemy is passed using the create_engine.connect_args parameter:

    engine = create_engine(
        "postgresql://user:pass@host/dbname",
        connect_args={'client_encoding': 'utf8'})

    # using the query string is equivalent
    engine = create_engine("postgresql://user:pass@host/dbname?client_encoding=utf8")

The above parameter was only added to libpq as of version 9.1 of PostgreSQL, so using the previous method is better for cross-version support.

Disabling Native Unicode

SQLAlchemy can also be instructed to skip the usage of the psycopg2 UNICODE extension and to instead utilize its own unicode encode/decode services, which are normally reserved only for those DBAPIs that don't fully support unicode directly. Passing use_native_unicode=False to create_engine() will disable usage of psycopg2.extensions.UNICODE. SQLAlchemy will instead encode data itself into Python bytestrings on the way in and coerce from bytes on the way back, using the value of the create_engine() encoding parameter, which defaults to utf-8. SQLAlchemy's own unicode encode/decode functionality is steadily becoming obsolete as most DBAPIs now support unicode fully.

Bound Parameter Styles

The default parameter style for the psycopg2 dialect is "pyformat", where SQL is rendered using %(paramname)s style. This format has the limitation that it does not accommodate the unusual case of parameter names that actually contain percent or parenthesis symbols; as SQLAlchemy in many cases generates bound parameter names based on the name of a column, the presence of these characters in a column name can lead to problems. There are two solutions to the issue of a schema.Column that contains one of these characters in its name.
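A quick compile-only sketch of the first solution, using a hypothetical measurement table: giving the column a clean .key changes the bound parameter name that SQLAlchemy generates, while the quoted database name is unaffected:

```python
from sqlalchemy import Column, Integer, MetaData, Table

metadata = MetaData()
# The database-side name contains parentheses; "key" supplies a clean
# Python-side name, which is also used for bound parameters.
measurement = Table(
    "measurement", metadata,
    Column("Size (meters)", Integer, key="size_meters"),
)

sql = str(measurement.insert())
# INSERT INTO measurement ("Size (meters)") VALUES (:size_meters)
```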
One is to specify the schema.Column.key for columns that have such names:

    measurement = Table('measurement', metadata,
        Column('Size (meters)', Integer, key='size_meters')
    )

Above, an INSERT statement such as measurement.insert() will use size_meters as the parameter name, and a SQL expression such as measurement.c.size_meters > 10 will derive the bound parameter name from the size_meters key as well.

Changed in version 1.0.0: SQL expressions will use Column.key as the source of naming when anonymous bound parameters are created in SQL expressions; previously, this behavior only applied to Table.insert() and Table.update() parameter names.

The other solution is to use a positional format; psycopg2 allows use of the "format" paramstyle, which can be passed to create_engine.paramstyle:

    engine = create_engine(
        'postgresql://scott:tiger@localhost:5432/test', paramstyle='format')

With the above engine, instead of a statement like:

    INSERT INTO measurement ("Size (meters)") VALUES (%(Size (meters))s)
    {'Size (meters)': 1}

we instead see:

    INSERT INTO measurement ("Size (meters)") VALUES (%s)
    (1, )

Where above, the dictionary style is converted into a tuple with positional style.

Transactions

The psycopg2 dialect fully supports SAVEPOINT and two-phase commit operations.

Psycopg2 Transaction Isolation Level

As discussed in Transaction Isolation Level, all PostgreSQL dialects support setting of transaction isolation level both via the isolation_level parameter passed to create_engine(), as well as the isolation_level argument used by Connection.execution_options(). When using the psycopg2 dialect, these options make use of psycopg2's set_isolation_level() connection method, rather than emitting a PostgreSQL directive; this is because psycopg2's API-level setting is always emitted at the start of each transaction in any case.

The psycopg2 dialect supports these constants for isolation level:

* READ COMMITTED
* READ UNCOMMITTED
* REPEATABLE READ
* SERIALIZABLE
* AUTOCOMMIT

NOTICE logging

The psycopg2 dialect will log PostgreSQL NOTICE messages via the sqlalchemy.dialects.postgresql logger. When this logger is set to the logging.INFO level, notice messages will be logged:

    import logging

    logging.getLogger('sqlalchemy.dialects.postgresql').setLevel(logging.INFO)

Above, it is assumed that logging is configured externally. If this is not the case, configuration such as logging.basicConfig() must be utilized:

    import logging

    logging.basicConfig()   # log messages to stdout
    logging.getLogger('sqlalchemy.dialects.postgresql').setLevel(logging.INFO)

HSTORE type

The psycopg2 DBAPI includes an extension to natively handle marshalling of the HSTORE type.
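Independent of the psycopg2 extension, SQLAlchemy's HSTORE type provides indexing and comparison operators at the expression level. A brief compile-only sketch, using a hypothetical prefs table:

```python
from sqlalchemy import Column, MetaData, Table
from sqlalchemy.dialects import postgresql
from sqlalchemy.dialects.postgresql import HSTORE

metadata = MetaData()
prefs = Table("prefs", metadata, Column("data", HSTORE))

# Subscripting an HSTORE column renders PostgreSQL's -> operator,
# which fetches the value for a given key.
expr = prefs.c.data["theme"]
sql = str(expr.compile(dialect=postgresql.dialect()))
# prefs.data -> %(data_1)s
```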
The SQLAlchemy psycopg2 dialect will enable this extension by default when psycopg2 version 2.4 or greater is used, and it is detected that the target database has the HSTORE type set up for use. In other words, when the dialect makes the first connection, a sequence like the following is performed:

* Request the available HSTORE oids using psycopg2.extras.HstoreAdapter.get_oids(). If this function returns a list of HSTORE identifiers, we then determine that the HSTORE extension is present. This function is skipped if the version of psycopg2 installed is less than version 2.4.

* If the use_native_hstore flag is at its default of True, and we've detected that HSTORE oids are available, the psycopg2.extensions.register_hstore() extension is invoked for all connections.

The register_hstore() extension has the effect of all Python dictionaries being accepted as parameters regardless of the type of target column in SQL. The dictionaries are converted by this extension into a textual HSTORE expression. If this behavior is not desired, disable the use of the hstore extension by setting use_native_hstore to False as follows:

    engine = create_engine("postgresql+psycopg2://scott:tiger@localhost/test",
                use_native_hstore=False)

The HSTORE type is still supported when the psycopg2.extensions.register_hstore() extension is not used. It merely means that the coercion between Python dictionaries and the HSTORE string format, on both the parameter side and the result side, will take place within SQLAlchemy's own marshalling logic, and not that of psycopg2, which may be more performant.

pg8000

Support for the PostgreSQL database via the pg8000 driver.

DBAPI

Documentation and download information (if applicable) for pg8000 is available at: https://pythonhosted.org/pg8000/

Connecting

Connect String:

    postgresql+pg8000://user:password@host:port/dbname[?key=value&key=value...]

Note
The pg8000 dialect is not tested as part of SQLAlchemy's continuous integration and may have unresolved issues. The recommended PostgreSQL dialect is psycopg2.

Unicode

pg8000 will encode / decode string values between it and the server using the PostgreSQL client_encoding parameter; by default this is the value in the postgresql.conf file, which often defaults to SQL_ASCII. Typically, this can be changed to utf-8, as a more useful default:

    #client_encoding = sql_ascii # actually, defaults to database
                                 # encoding
    client_encoding = utf8

The client_encoding can be overridden for a session by executing the SQL:

    SET CLIENT_ENCODING TO 'utf8';

SQLAlchemy will execute this SQL on all new connections based on the value passed to create_engine() using the client_encoding parameter:

    engine = create_engine(
        "postgresql+pg8000://user:pass@host/dbname", client_encoding='utf8')

pg8000 Transaction Isolation Level

The pg8000 dialect offers the same isolation level settings as that of the psycopg2 dialect:

* READ COMMITTED
* READ UNCOMMITTED
* REPEATABLE READ
* SERIALIZABLE
* AUTOCOMMIT

New in version 0.9.5: support for AUTOCOMMIT isolation level when using pg8000.

psycopg2cffi

Support for the PostgreSQL database via the psycopg2cffi driver.

DBAPI

Documentation and download information (if applicable) for psycopg2cffi is available at: http://pypi.python.org/pypi/psycopg2cffi/

Connecting

Connect String:

    postgresql+psycopg2cffi://user:password@host:port/dbname[?key=value&key=value...]

psycopg2cffi is an adaptation of psycopg2, using CFFI for the C layer. This makes it suitable for use in e.g. PyPy. Documentation is as per psycopg2.

py-postgresql

Support for the PostgreSQL database via the py-postgresql driver.

DBAPI

Documentation and download information (if applicable) for py-postgresql is available at: http://python.projects.pgfoundry.org/

Connecting

Connect String:

    postgresql+pypostgresql://user:password@host:port/dbname[?key=value&key=value...]

Note
The pypostgresql dialect is not tested as part of SQLAlchemy's continuous integration and may have unresolved issues. The recommended PostgreSQL driver is psycopg2.

pygresql

Support for the PostgreSQL database via the pygresql driver.

DBAPI

Documentation and download information (if applicable) for pygresql is available at: http://www.pygresql.org/

Connecting

Connect String:

    postgresql+pygresql://user:password@host:port/dbname[?key=value&key=value...]

Note
The pygresql dialect is not tested as part of SQLAlchemy's continuous integration and may have unresolved issues. The recommended PostgreSQL dialect is psycopg2.

zxjdbc

Support for the PostgreSQL database via the zxJDBC for Jython driver.

DBAPI

Drivers for this database are available at: http://jdbc.postgresql.org/

Connecting

Connect String:

    postgresql+zxjdbc://scott:tiger@localhost/db
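All of the dialects above share the same URL grammar. When a connect string needs to be inspected or built programmatically rather than assembled by string concatenation, sqlalchemy.engine.url.make_url parses it into its components:

```python
from sqlalchemy.engine.url import make_url

url = make_url("postgresql+psycopg2://scott:tiger@localhost:5432/test")
print(url.drivername)                    # postgresql+psycopg2
print(url.host, url.port, url.database)  # localhost 5432 test
```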