
2022-06-24 01:43:00 · Tcaplus Jun

【TcaplusDB Knowledge Base】How to get started with the Tcaplus SQL Driver?

Introduction

TcaplusDB SQL Driver/C++ is the TcaplusDB connector used to connect a C++ application to a TcaplusDB server, so that the TcaplusDB server can be accessed with traditional SQL statements.

Binary installation

TcaplusDB SQL Driver/C++ binary distributions are provided as platform-specific compressed packages, delivered as general-purpose tar or zip archives, referred to here as PACKAGE.tar.gz or PACKAGE.zip. To unpack the tar archive, run the following command in the intended installation directory: tar zxvf PACKAGE.tar.gz. To install from the zip package (.zip), use WinZip or any other tool that can read ".zip" files and extract the archive to a location of your choice.
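
On Linux, for example, the two package formats can be unpacked as follows (PACKAGE stands for the actual file name; unzip is one possible substitute for WinZip):

tar zxvf PACKAGE.tar.gz                   # .tar.gz package, unpacks into the current directory
unzip PACKAGE.zip -d /your/install/dir    # .zip package; the target directory is a placeholder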

TcaplusDB and Tcaplus SQL Driver version compatibility

  • TcaplusDB version: TcaplusDB SQL Driver/C++ is developed against TcaplusDB 3.53.0.
  • C++ standard: TcaplusDB SQL Driver/C++ is developed against the C++98 standard.

TcaplusDB SQL Driver/C++ Quick start

To connect to TcaplusDB through TcaplusDB SQL Driver/C++, the connection URL has the following format: tcp://<instance_ip>:<instance_port>?app_id=<app_id>&zone_id=<zone_id>, where <instance_ip> is the instance host IP, <instance_port> is the open port (9999 by default), <app_id> is the business ID, and <zone_id> is the game zone ID. An example of connecting and reading data:

#include "cppconn/statement.h"
#include "cppconn/resultset.h"
#include "cppconn/connection.h"
#include "cppconn/driver.h"
#include "cppconn/metadata.h"
#include "cppconn/resultset.h"
#include "cppconn/resultset_metadata.h"
#include "cppconn/prepared_statement.h"
#include "tcapsql_driver.h"
#include <iostream>
​
int main()
{
    sql::Driver *driver;
    sql::Connection *conn;
    sql::Statement *state;
    sql::ResultSet *rset;
    try {
    driver = sql::tcapsql::get_driver_instance();
    conn = driver->connect("tcp://192.168.0.1:9999?app_id=2&zone_id=3", "", "123456");
    state = conn->createStatement();
    rset = state->executeQuery("SELECT * FROM user WHERE user_id = '10000'");
    while (rset->next())
    {
        std::cout << rset->getString(1) << std::endl;
    }
    delete rset;
    delete state;
    delete conn;
    } catch (sql::SQLException &e) {
        std::cout << "catch errCode " << e.getErrorCode() << " errMsg " << e.what() << std::endl;
        return EXIT_FAILURE;
    }
    return 0;
}

Makefile

export TCAPSQL_DRIVER=/yourpath/TcaplusSQLDriver3.53.1.204605.x86_64_release_20210521/release/x86_64/
CPPFILE=$(wildcard *.cpp)
LIBS +=-L$(TCAPSQL_DRIVER)/lib -Wl,-Bstatic -ltcaplus_sqldriver -Wl,-Bdynamic -lpthread -lanl
 
INC = -I$(TCAPSQL_DRIVER)/include/
.PHONY: all clean 
all:
    g++ -o mytest $(CPPFILE) $(INC) ${LIBS}  
clean:
    rm -f mytest
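
With TCAPSQL_DRIVER pointing at the actual unpacked SDK path, building and running the example is then just (a sketch; the binary name mytest comes from the Makefile above):

make
./mytest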

TcaplusDB Table definition and table creation

A TcaplusDB table is defined with XML. An example is shown below:

<?xml version="1.0" encoding="GBK" standalone="yes" ?>
<metalib name="demo_table" tagsetversion="1" version="1">
    <struct name="user" version="1" primarykey="user_id,server_id" splittablekey="user_id">
        <entry name="user_id" type="string" size="450" desc="user ID"/>
        <entry name="server_id" type="uint64" desc="server ID" />
        <entry name="nick_name" type="string" size="50" desc="nickname"/>
        <entry name="desc" type="string" size="1024" desc="description"/>
        <entry name="state" type="Tinyuint" defaultvalue="0" desc="user state: 0 AVAILABLE, 1 DELETED"/>
        <index name="index1" column="user_id"/>
        <index name="index2" column="user_id,server_id"/>
    </struct>
</metalib>
  • The metalib element is the root element of the XML file.
  • A struct element that contains a primarykey attribute defines a table; a struct element without primarykey defines an ordinary structure.
  • Every time the table structure is modified, the version attribute value must be incremented by 1; the initial version is always 1.
  • The primarykey attribute specifies the primary key fields. A generic table can have at most 8 primary key fields; a list table can have at most 7.
  • The splittablekey attribute is equivalent to a partition key (shard key); TcaplusDB splits a table and stores it across multiple storage nodes. splittablekey must be one of the primary key fields, and a good splittablekey should be highly dispersed, i.e. have a wide range of values; a string type is recommended.
  • The desc attribute contains the description of the current element.
  • An entry element defines a field. Supported value types include int32, string, char, int64, double, short, etc.
  • An index element defines an index. The index must contain the splittablekey. Since the primary key can already be used to query the table, an index should not be identical to the primary key.

TcaplusDB SQL Driver/C++ Example

TcaplusDB SQL Driver/C++: obtaining a driver instance

A driver instance is obtained by calling get_driver_instance; doing this once per process is enough.

sql::Driver * driver = sql::tcapsql::get_driver_instance();

Building the URL from the back-end TcaplusDB connection information and establishing a connection

// IP address of the instance host
#define DIR_HOST "192.168.0.1"
// open port of the instance
#define DIR_PORT 9999
// business id
#define APP_ID 2
// game zone id
#define ZONE_ID 3
// signature
#define SIGNATURE "123456"
char url[120] = {0};
snprintf(url, 120, "tcp://%s:%d?app_id=%d&zone_id=%d", DIR_HOST, DIR_PORT, APP_ID, ZONE_ID);
// The second parameter is the name of the send/receive thread. Connections that pass the same
// thread name share one send/receive thread for sending and receiving packets.
// One send/receive thread can serve roughly 200 connections, but more connections per thread
// lead to higher latency; adjust as needed.
sql::Connection * conn = driver->connect(url, "instance1", SIGNATURE);

A Statement object executes basic SQL statements, and the results are obtained through the ResultSet class. Calling createStatement() on the Connection object returned by driver->connect() creates a Statement instance; executeQuery() can then run a SELECT query from an SQL string. To update data in the database, use executeUpdate(), which is sketched after the SELECT example below. When the statement is a SELECT, getResultSet() returns the query result; when the statement is an UPDATE, INSERT, or DELETE, getUpdateCount() on the Statement instance returns the number of affected rows. An example of executing a SELECT query:

sql::Statement * state = conn->createStatement();
sql::ResultSet * rset = state->executeQuery("SELECT * FROM user WHERE user_id = '10000'");
while (rset->next())
{
    std::cout << rset->getString(1) << std::endl;
}
delete rset;
delete state;
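
The following sketch shows the update path described above, assuming the driver follows the usual cppconn-style Statement interface in which executeUpdate() returns the number of affected rows (the same value reported by getUpdateCount()); the table and field values are illustrative.

// Run an UPDATE through the same Statement API; a minimal sketch.
sql::Statement * state = conn->createStatement();
int affected = state->executeUpdate("UPDATE user SET nick_name = 'alice' WHERE user_id = '10000' AND server_id = 1");
std::cout << "updated rows: " << affected << std::endl;   // same value as state->getUpdateCount()
delete state;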

Mapping between SQL types and TcaplusDB types

TcaplusDB to SQL data type mapping

TcaplusDB data type | sql type
int8 | sql::DataType::TINYINT (TINYINT)
uint8 | sql::DataType::TINYINT (TINYINT UNSIGNED)
int16 | sql::DataType::SMALLINT (SMALLINT)
uint16 | sql::DataType::SMALLINT (SMALLINT UNSIGNED)
int32 | sql::DataType::INTEGER (INT)
uint32 | sql::DataType::INTEGER (INT UNSIGNED)
int64 | sql::DataType::BIGINT (BIGINT)
uint64 | sql::DataType::BIGINT (BIGINT UNSIGNED)
float | sql::DataType::REAL (FLOAT)
double | sql::DataType::DOUBLE (DOUBLE)
string | sql::DataType::VARCHAR (VARCHAR)
binary | sql::DataType::VARBINARY (BLOB)
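
To show how these mappings surface on the client side, the sketch below reads columns of the example user table with cppconn-style typed accessors (getString, getUInt64, getInt); which accessors the driver exposes beyond those used in this article is an assumption, and the snippet continues the Statement example above.

// Read typed columns from a ResultSet; a minimal sketch using cppconn-style accessors.
sql::ResultSet * rset = state->executeQuery("SELECT user_id, server_id, state FROM user WHERE user_id = '10000' AND server_id = 1");
while (rset->next())
{
    std::cout << rset->getString("user_id")            // string   -> VARCHAR
              << " " << rset->getUInt64("server_id")   // uint64   -> BIGINT UNSIGNED
              << " " << rset->getInt("state")          // tinyuint -> TINYINT
              << std::endl;
}
delete rset;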

Mapping TcaplusDB error codes to SQL Driver SQLState codes

Common TcaplusDB error codes and their descriptions are listed below:

Error code | Name | Description
2309 | TXHDB_ERR_RECORD_IS_NOT_SET_TTL | The record has no expiration time (TTL) set
261 | TXHDB_ERR_RECORD_NOT_EXIST | The record does not exist
-261 | TXHDB_ERR_INVALID_ARGUMENTS | Internal parameter error
-273 | PROXY_ERR_INVALID_PARAMS | Internal parameter error
-275 | API_ERR_OVER_MAX_KEY_FIELD_NUM | The number of primary key fields exceeds the limit; the limit is 4 for generic tables and 3 for list tables
-517 | TXHDB_ERR_INVALID_MEMBER_VARIABLE_VALUE | Internal parameter error
-525 | SVR_ERR_FAIL_TIMEOUT | The storage-tier request timed out
-529 | PROXY_ERR_NO_NEED_ROUTE_BATCHGET_ACTION_MSG_WHEN_NODE_IS_IN_SYNC_STATUS | Some commands are not supported during lossy data migration; migrations are currently mostly lossless, so this error code will not occur
-531 | API_ERR_OVER_MAX_VALUE_FIELD_NUM | The number of non-primary-key fields exceeds the limit; the limit is 128 for generic tables and 127 for list tables
-773 | TXHDB_ERR_ALREADY_OPEN | The engine file was opened repeatedly
-781 | SVR_ERR_FAIL_SHORT_BUFF | Buffer too short; internal error
-785 | PROXY_ERR_NO_NEED_ROUTE_WHEN_NODE_IS_IN_REJECT_STATUS | Some commands are not supported during lossy data migration; migrations are currently mostly lossless, so this error code will not occur
-787 | API_ERR_OVER_MAX_FIELD_NAME_LEN | The field name length exceeds the limit (32 B)
-1037 | SVR_ERR_FAIL_SYSTEM_BUSY | The storage tier is overloaded; please contact the DBA
-1043 | API_ERR_OVER_MAX_FIELD_VALUE_LEN | The field value length exceeds the limit
-1293 | SVR_ERR_FAIL_RECORD_EXIST | The record to be inserted already exists
-1299 | API_ERR_FIELD_NOT_EXSIST | The accessed field does not exist; make sure the field name is spelled correctly
-1549 | SVR_ERR_FAIL_INVALID_FIELD_NAME | The accessed field does not exist; make sure the table schema has been deployed to the back end
-1555 | API_ERR_FIELD_TYPE_NOT_MATCH | The data type of the field value does not match its declared type
-1792 | GEN_ERR_TABLE_READONLY | The table is in read-only mode; check whether the RCU, WCU, or capacity exceeds the user-defined threshold
-1805 | SVR_ERR_FAIL_VALUE_OVER_MAX_LEN | The record exceeds the maximum length; it cannot exceed 10 MB after serialization
-1811 | API_ERR_PARAMETER_INVALID | Parameter error
-2048 | GEN_ERR_TABLE_READ_DELETE | Capacity exceeded; reads and deletes are allowed, but writes are not
-2061 | SVR_ERR_FAIL_INVALID_FIELD_TYPE | Invalid table field type
-2067 | API_ERR_OPERATION_TYPE_NOT_MATCH | The table type and command word do not match, e.g. a list-table command word was used on a generic table
-2304 | GEN_ERR_ACCESS_DENIED | Access denied
-2317 | SVR_ERR_FAIL_SYNC_WRITE | tcapsvr_fail_sync_write
-2323 | API_ERR_PACK_MESSAGE | Packing error; please contact the administrator
-2560 | GEN_ERR_INVALID_ARGUMENTS | Parameter error; please contact the administrator
-2573 | SVR_ERR_FAIL_WRITE_RECORD | Engine write error; please contact the administrator
-2579 | API_ERR_UNPACK_MESSAGE | Unpacking error; please contact the administrator
-2816 | GEN_ERR_UNSUPPORT_OPERATION | Unsupported operation; please contact the administrator
-2823 | ENG_ERR_ENGINE_ERROR | Engine error; please contact the administrator
-2829 | SVR_ERR_FAIL_DELETE_RECORD | Failed to delete the engine record; please contact the administrator
-3072 | GEN_ERR_NOT_ENOUGH_MEMORY | Out of memory
-3079 | ENG_ERR_DATA_ERROR | Data error; please contact the administrator
-3085 | SVR_ERR_FAIL_DATA_ENGINE | The SetFieldName operation specified a wrong field
-3091 | API_ERR_OVER_MAX_RECORD_NUM | The number of tables accessed by the client exceeds the limit; this should normally not happen and indicates a bug, please contact the administrator
-3328 | GEN_ERR_NOT_SATISFY_INSERT_FOR_SORTLIST | sortlist-related error code
-3341 | SVR_ERR_FAIL_RESULT_OVERFLOW | The field value size exceeds the limit of its declared type
-3347 | API_ERR_INVALID_COMMAND | The command word of the response packet does not match
-3603 | API_ERR_NO_MORE_RECORD | No more records
-3859 | API_ERR_OVER_KEY_FIELD_NUM | Exceeds the maximum number of key fields; currently at most 8 keys, one fewer for list tables
-4109 | SVR_ERR_FAIL_INVALID_INDEX | The list element index is out of range
-4115 | API_ERR_OVER_VALUE_FIELD_NUM | Exceeds the maximum number of value fields; currently at most 256 values, one fewer for list tables
-4365 | SVR_ERR_FAIL_OVER_MAXE_FIELD_NUM | The maximum number of fields is exceeded
-4371 | API_ERR_OBJ_NEED_INIT | The API is not initialized (or RegisterTable was not called)
-4621 | SVR_ERR_FAIL_MISS_KEY_FIELD | The request is missing a primary key field or an index field
-4627 | API_ERR_INVALID_DATA_SIZE | The data size is incorrect; usually the local table definition is inconsistent with the server's
-4883 | API_ERR_INVALID_ARRAY_COUNT | Invalid array size
-5137 | PROXY_ERR_PACK_MSG | Packing failed
-5139 | API_ERR_INVALID_UNION_SELECT | Invalid union select
-5393 | PROXY_ERR_SEND_MSG | Failed to send message
-5395 | API_ERR_MISS_PRIMARY_KEY | The primary key is missing from the request
-5649 | PROXY_ERR_ALLOCATE_MEMORY | Out of memory
-5651 | API_ERR_UNSUPPORT_FIELD_TYPE | Unsupported field type
-5905 | PROXY_ERR_PARSE_MSG | Failed to parse message
-5907 | API_ERR_ARRAY_BUFFER_IS_SMALL | The buffer is too small
-6157 | SVR_ERR_FAIL_LIST_FULL | The number of elements in the list table exceeds the defined range; please configure element eviction
-6161 | PROXY_ERR_INVALID_MSG | Invalid message
-6412 | SVR_ERR_FAIL_LOW_VERSION | Version too low
-6417 | PROXY_ERR_FAILED_PROC_REQUEST_BECAUSE_NODE_IS_IN_SYNC_STASUS | Some commands are not supported during lossy data migration; migrations are currently mostly lossless, so this problem will not occur
-6669 | SVR_ERR_FAIL_HIGH_VERSION | Version too high
-6673 | PROXY_ERR_KEY_FIELD_NUM_IS_ZERO | No key fields
-6925 | SVR_ERR_FAIL_INVALID_RESULT_FLAG | Incorrect result_flag setting; see the result_flag description in the SDK
-6929 | PROXY_ERR_LACK_OF_SOME_KEY_FIELDS | Key fields missing from the request
-7181 | SVR_ERR_FAIL_PROXY_STOPPING | The access layer is stopping; the business does not need to handle this, the API will automatically switch to a new access layer
-7185 | PROXY_ERR_FAILED_TO_FIND_NODE | Routing node not found; please contact the administrator
-7437 | SVR_ERR_FAIL_SVR_READONLY | The storage tier is read-only; usually the disk is full or a primary/standby switchover is in progress
-7441 | PROXY_ERR_INVALID_COMPRESS_TYPE | Unsupported compression type
-7443 | API_ERR_INCOMPATIBLE_META | Incompatible table schema; check whether the local table schema is compatible with the server's
-7669 | API_ERR_PACK_ARRAY_DATA | Failed to pack an array field
-7693 | SVR_ERR_FAIL_SVR_READONLY_BECAUSE_IN_SLAVE_MODE | The request was sent to a storage-tier standby node, possibly during a primary/standby switchover; if this error persists for a long time, please contact the administrator
-7697 | PROXY_ERR_REQUEST_OVERSPEED | The request rate exceeds the quota
-7949 | SVR_ERR_FAIL_INVALID_VERSION | The record version number in the request is inconsistent with the actual record version number; please check the lock
-7953 | PROXY_ERR_SWIFT_TIMEOUT | Access-layer timeout
-7955 | API_ERR_PACK_UNION_DATA | Failed to pack union data
-8205 | SVR_ERR_FAIL_SYSTEM_ERROR | Storage-tier internal error; please contact the administrator
-8209 | PROXY_ERR_SWIFT_ERROR | Non-timeout access-layer transaction error; please contact the administrator
-8211 | API_ERR_PACK_STRUCT_DATA | Failed to pack struct data
-8461 | SVR_ERR_FAIL_OVERLOAD | The storage tier is overloaded
-8465 | PROXY_ERR_DIRECT_RESPONSE | The access layer returns the packet directly without going through business logic; usually used to test the API
-8467 | API_ERR_UNPACK_ARRAY_DATA | Failed to unpack an array field
-8717 | SVR_ERR_FAIL_NOT_ENOUGH_DADADISK_SPACE | Insufficient space on the storage-tier data disk
-8721 | PROXY_ERR_INIT_TLOG | Failed to initialize the log module
-8723 | API_ERR_UNPACK_UNION_DATA | Failed to unpack union data
-8973 | SVR_ERR_FAIL_NOT_ENOUGH_ULOGDISK_SPACE | Insufficient space on the storage-tier binlog disk
-8979 | API_ERR_UNPACK_STRUCT_DATA | Failed to unpack struct data
-9229 | SVR_ERR_FAIL_UNSUPPORTED_PROTOCOL_MAGIC | Internal error; unsupported access-layer protocol magic
-9233 | PROXY_ERR_REQUEST_ACCESS_CTRL_REJECT | Access denied by the access layer
-9235 | API_ERR_INVALID_INDEX_NAME | The index does not exist
-9485 | SVR_ERR_FAIL_UNSUPPORTED_PROTOCOL_CMD | Unsupported command word
-9489 | PROXY_ERR_NOT_ALL_NODES_ARE_IN_NORMAL_OR_WAIT_STATUS | A traversal request returns this error code because the table is being migrated
-9491 | API_ERR_MISS_PARTKEY_FIELD | Missing partkey field
-9745 | PROXY_ERR_ALREADY_CACHED_REQUEST_TIMEOUT | A cached request timed out during a routing change
-9747 | API_ERR_ALLOCATE_MEMORY | Failed to allocate memory
-9997 | SVR_ERR_FAIL_MERGE_VALUE_FIELD | Failed to merge value fields; internal error, please contact the administrator
-10001 | PROXY_ERR_FAILED_TO_CACHE_REQUEST | Failed to cache the request during a routing change
-10003 | API_ERR_GET_META_SIZE | api_get_meta_size_error
-10253 | SVR_ERR_FAIL_CUT_VALUE_FIELD | Failed to merge value fields; internal error, please contact the administrator
-10257 | PROXY_ERR_NOT_EXIST_CACHED_REQUEST | The cached request does not exist during a routing change
-10259 | API_ERR_MISS_BINARY_VERSION | Internal error; the binary field is missing a version
-10509 | SVR_ERR_FAIL_PACK_FIELD | Internal error; failed to pack a field, please contact the administrator
-10513 | PROXY_ERR_FAILED_NOT_ENOUGH_CACHE_BUFF | Insufficient buffer during a routing change
-10765 | SVR_ERR_FAIL_UNPACK_FIELD | Storage-tier unpacking failed
-10769 | PROXY_ERR_FAILED_PROCESS_CACHED_REQUEST | The access layer failed to process the cached request
-10771 | API_ERR_INVALID_RESULT_FLAG | Invalid result flag
-11021 | SVR_ERR_FAIL_LOW_API_VERSION | The API version is too low; please upgrade the API
-11027 | API_ERR_OVER_MAX_LIST_INDEX_NUM | The list table exceeds the maximum number of elements
-11277 | SVR_ERR_COMMAND_AND_TABLE_TYPE_IS_MISMATCH | The operation method does not exist for this table type
-11283 | API_ERR_INVALID_OBJ_STATUE | Not initialized; usually occurs in traversal requests
-11533 | SVR_ERR_FAIL_TO_FIND_CACHE | The storage tier could not find the data shard; internal error, please contact the administrator
-11537 | PROXY_ERR_SWIFT_SEND_BUFFER_FULL | The send buffer is full; the API is processing responses too slowly
-11539 | API_ERR_INVALID_REQUEST | Invalid request
-11789 | SVR_ERR_FAIL_TO_FIND_META | The storage tier could not find the table definition; internal error, please contact the administrator
-11793 | PROXY_ERR_REQUEST_OVERLOAD_CTRL_REJECT | The access layer is overloaded
-12045 | SVR_ERR_FAIL_TO_GET_CURSOR | The storage tier failed to obtain a traversal cursor, possibly because too many traversal requests are running at the same time; rarely encountered
-12049 | PROXY_ERR_SQL_QUERY_MGR_IS_NULL | Index not created?
-12051 | API_ERR_TABLE_NAME_MISSING | Table name not set
-12301 | SVR_ERR_FAIL_OUT_OF_USER_DEF_RANGE | Out of the user-defined range
-12305 | PROXY_ERR_SQL_QUERY_INVALID_SQL_TYPE | Invalid SQL request
-12307 | API_ERR_SOCKET_SEND_BUFF_IS_FULL | Failed to send the request, network overloaded; please contact the administrator
-12557 | SVR_ERR_INVALID_ARGUMENTS | Internal parameter error
-12561 | PROXY_ERR_GET_TRANSACTION_FAILED | Allocation failed; please contact the administrator to increase the transaction concurrency configuration
-12563 | API_ERR_INVALID_MAGIC | Incorrect magic (communication problem); if the problem persists, please contact the administrator
-12817 | PROXY_ERR_ADD_TRANSACTION_FAILED | Allocation failed; please contact the administrator to increase the transaction concurrency configuration
-12819 | API_ERR_TABLE_IS_NOT_EXIST | The table does not exist
-13069 | SVR_ERR_NULL_CACHE | The data shard does not exist
-13073 | PROXY_ERR_QUERY_FROM_INDEX_SERVER_FAILED | Failed to query the global index
-13075 | API_ERR_SHORT_BUFF | Buffer too short
-13329 | PROXY_ERR_QUERY_FROM_INDEX_SERVER_TIMEOUT | Global index query timed out
-13331 | API_ERR_FLOW_CONTROL | api_flow_control
-13581 | SVR_ERR_METALIB_VERSION_LESS_THAN_ENTRY_VERSION | The metalib version in the request is less than the entry version
-13585 | PROXY_ERR_QUERY_FOR_CONVERT_TCAPLUS_REQ_TO_INDEX_SERVER_REQ_FAILED | Failed to parse the SQL statement for the global index
-13587 | API_ERR_COMPRESS_SWITCH_NOT_SUPPORTED_REGARDING_THIS_CMD | SetCompressSwitch was called for a command that does not support protocol compression
-13837 | SVR_ERR_INVALID_SELECT_ID_FOR_UNION | Internal error; please contact the administrator
-13841 | PROXY_ERR_QUERY_INDEX_FIELD_NOT_EXIST | The index field does not exist
-13843 | API_ERR_FAILED_TO_FIND_ROUTE | Back-end network exception, the request could not be sent; if it persists, please contact the administrator
-14093 | SVR_ERR_CAN_NOT_FIND_SELECT_ENTRY_FOR_UNION | Internal error; please contact the administrator
-14097 | PROXY_ERR_THIS_SQL_IS_NOT_SUPPORT | The SQL statement is not supported
-14099 | API_ERR_OVER_MAX_PKG_SIZE | The inserted record exceeds the size limit (1 MB)
-14353 | PROXY_ERR_NO_SUCH_APPID | Access-layer authentication failed; the business authentication information was not found
-14605 | SVR_ERR_TCAPSVR_PROCESS_NOT_NORMAL | The tcapsvr process is abnormal
-14609 | PROXY_ERR_NO_APP_USER_PASSWD | The access layer cannot find the user authentication information
-14865 | PROXY_ERR_NO_APP_USER_PASSWD_RECORD | The access layer cannot find the user authentication information
-15117 | SVR_ERR_INVALID_ARRAY_COUNT | Invalid array size
-15121 | PROXY_ERR_NO_APP_USER_OPT | The access layer cannot find the user authentication information
-15123 | API_ERR_ADD_RECORD | Failed to add a new record to the request
-15373 | SVR_ERR_REJECT_REQUEST_BECAUSE_ROUTE_IN_REJECT_STATUS | Request rejected: the server is in a migration state, routing error
-15377 | PROXY_ERR_NO_APP_USER_OPT_RECORD | The access layer did not find the user authentication information
-15379 | API_ERR_ZONE_IS_NOT_EXIST | The zone does not exist
-15635 | API_ERR_TRAVERSER_IS_NOT_EXIST | The traverser does not exist
-15885 | SVR_ERR_FAIL_INVALID_FIELD_VALUE | Invalid field value
-16141 | SVR_ERR_FAIL_PROTOBUF_FIELD_GET | The GetRecord operation on a PB table failed; please contact the administrator
-16147 | API_ERR_INSTANCE_INIT_LOG_FAILURE | Failed to initialize the log module
-16389 | TXHDB_ERR_INVALID_VALUE_DATABLOCK_NUM | Abnormal number of value data blocks in the storage layer; please contact the administrator
-16397 | SVR_ERR_FAIL_PROTOBUF_VALUE_BUFF_EXCEED | A non-primary-key field value of a PB table exceeds the size limit (256 KB)
-16403 | API_ERR_CONNECTOR_IS_ABNORMAL | The connection is abnormal
-16653 | SVR_ERR_FAIL_PROTOBUF_FIELD_UPDATE | The FieldSetRecord operation on a PB table failed; please contact the administrator
-16659 | API_ERR_WAIT_RSP_TIMEOUT | Timed out waiting for the response
-16901 | TXHDB_ERR_COMPRESSION_FAIL | Compression failed
-16909 | SVR_ERR_FAIL_PROTOBUF_FIELD_INCREASE | The FieldIncRecord operation on a PB table failed; please contact the administrator
-17157 | TXHDB_ERR_DECOMPRESSION_FAIL | Decompression failed
-17165 | SVR_ERR_FAIL_PROTOBUF_FIELD_TAG_MISMATCH | PB table tag mismatch
-18445 | SVR_ERR_FAIL_DOCUMENT_NOT_SUPPORT | Document-type requests are not supported
-18701 | SVR_ERR_FAIL_PARTKEY_INSERT_NOT_SUPPORT | InsertByPartkey is not supported
-18957 | SVR_ERR_FAIL_SQL_FILTER_FAILED | Distributed index: SQL filter execution failed
-33541 | TXHDB_ERR_ADD_LSIZE_EXCEEDS_MAX_TSD_VALUE_BUFF_SIZE | Internal error; please contact the administrator
-34053 | TXHDB_ERR_TOO_BIG_KEY_BIZ_SIZE | The key exceeds the maximum limit of 1 KB
-34309 | TXHDB_ERR_TOO_BIG_VALUE_BIZ_SIZE | The value exceeds the maximum limit of 10 MB
-34565 | TXHDB_ERR_INDEX_NO_EXIST | The index does not exist
-39685 | TXHDB_ERR_FILE_EXCEEDS_LSIZE_LIMIT | A single data shard exceeds the 256 GB maximum limit; the write failed
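
To act on these codes in client code, compare the value carried by sql::SQLException with the table above. The sketch below branches on -1293 (SVR_ERR_FAIL_RECORD_EXIST); whether the driver ships named constants for these codes is not covered in this article, so the raw value is used, and the INSERT statement and field values are illustrative.

try {
    sql::Statement * state = conn->createStatement();
    // Insert a record; every field without a default value must be listed explicitly.
    state->executeUpdate("INSERT INTO user (user_id,server_id,nick_name,desc) values ('10000',1,'alice','new user')");
    delete state;
} catch (sql::SQLException &e) {
    if (e.getErrorCode() == -1293) {   // SVR_ERR_FAIL_RECORD_EXIST: the record to be inserted already exists
        std::cout << "record already exists, consider an UPDATE instead" << std::endl;
    } else {
        std::cout << "errCode " << e.getErrorCode() << " errMsg " << e.what() << std::endl;
    }
}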

Known problems and SQL limitations

SQL syntax limitations

Suppose the table demo has five fields: key1, key2, key3, value1, and value2, where key1 and key2 form the partkey, and key1, key2, and key3 form the fullkey.

Insert operations

An insert operation must explicitly specify the values of all fields, unless a default value is given in the table definition. Inserting a single record uses an SQL statement of the following form:
INSERT INTO demo (key1,key2,key3,value1,value2) values (x1,x2,x3,x4,x5);
Inserting multiple records uses the following form:
INSERT INTO demo (key1,key2,key3,value1,value2) values (x1,x2,x3,x4,x5);
INSERT INTO demo (key1,key2,key3,value1,value2) values (x6,x7,x8,x9,x10);

where clause syntax restrictions

When no global index is configured, the filter in the where clause consists of two parts: 1) a required part, the partkey or fullkey; 2) an optional part, filter conditions. partkey or fullkey: only equality comparisons are allowed, and the fields that make up the partkey or fullkey can only be connected with AND. Filter conditions: the operators NOT, =, >, <, !=, >=, <= are supported, multiple filter conditions can be connected with AND or OR, and both key fields and value fields may be used.
1. When deleting, updating, or querying by fullkey, the where clause has the form:
WHERE key1=x1 AND key2=x2 AND key3=x3;
2. When deleting, updating, or querying by fullkey plus filter conditions, the where clause has the following form; if the filter conditions contain the OR operator, they must be enclosed in parentheses:
WHERE key1=x1 AND key2=x2 AND key3=x3 AND (filter conditions);
3. When deleting, updating, or querying by partkey, the where clause has the form:
WHERE key1=x1 AND key2=x2;
4. When deleting, updating, or querying by partkey plus filter conditions, the where clause has the following form; if the filter conditions contain the OR operator, they must be enclosed in parentheses:
WHERE key1=x1 AND key2=x2 AND (filter conditions);
When the required part of the where clause is the partkey, the where clause may match multiple records.
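
For concreteness, a query that supplies the required partkey part and adds a parenthesized filter containing OR (the values x1 to x4 are placeholders, as elsewhere in this section) could look like this:

SELECT * FROM demo WHERE key1=x1 AND key2=x2 AND (value1>x3 OR value2!=x4);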

Delete operations

1. Deleting a single record by fullkey uses an SQL statement in one of the following two forms:

DELETE FROM demo WHERE key1=x1 AND key2=x2 AND key3=x3;
DELETE FROM demo WHERE key1=x1 AND key2=x2 AND key3=x3 AND (filter conditions);

2. Deleting records in batch by fullkey uses an SQL statement of the following form:
DELETE FROM demo WHERE key1=x1 AND key2=x2 AND key3=x3;
DELETE FROM demo WHERE key1=x4 AND key2=x5 AND key3=x6;
Delete and batch delete operations do not support partkey, and the batch delete operation does not support filter conditions.

Update operations

1. Updating a single record by fullkey uses an SQL statement in one of the following two forms:

UPDATE demo SET value1=x1, value2=x2 WHERE key1=x1 AND key2=x2 AND key3=x3;
UPDATE demo SET value1=x1, value2=x2 WHERE key1=x1 AND key2=x2 AND key3=x3 AND (filter conditions);

2. Updating records in batch by fullkey uses an SQL statement of the following form:
UPDATE demo SET value1=x1, value2=x2 WHERE key1=x3 AND key2=x4 AND key3=x5;
UPDATE demo SET value1=x6, value2=x7 WHERE key1=x8 AND key2=x9 AND key3=x10;
Update and batch update operations do not support partkey, and the batch update operation does not support filter conditions.

Query operations

1. Querying a single record by fullkey uses an SQL statement in one of the following four forms:

SELECT * FROM demo WHERE key1=x1 AND key2=x2 AND key3=x3;
SELECT * FROM demo WHERE key1=x1 AND key2=x2 AND key3=x3 AND (filter conditions);
SELECT key1,value1 FROM demo WHERE key1=x1 AND key2=x2 AND key3=x3;
SELECT key1,value1 FROM demo WHERE key1=x1 AND key2=x2 AND key3=x3 AND (filter conditions);

2. Querying multiple records by partkey uses an SQL statement in one of the following four forms:

SELECT * FROM demo WHERE key1=x1 AND key2=x2;
SELECT * FROM demo WHERE key1=x1 AND key2=x2 AND (filter conditions);
SELECT key1,value1 FROM demo WHERE key1=x1 AND key2=x2;
SELECT key1,value1 FROM demo WHERE key1=x1 AND key2=x2 AND (filter conditions);

3. Querying records in batch by fullkey uses an SQL statement of the following form:
SELECT * FROM demo WHERE key1=x1 AND key2=x2 AND key3=x3 OR key1=x4 AND key2=x5 AND key3=x6;
The batch query operation does not support filter conditions.
4. Querying records in batch by partkey uses an SQL statement of the following form:
SELECT * FROM demo WHERE key1=x1 AND key2=x2 OR key1=x3 AND key2=x4;
The batch query operation does not support filter conditions.

Global index query

TcaplusDB supports index queries through SQL query statements. The fields in the SQL query conditions must be fields that have a global index; in addition, for an aggregate query, the aggregated fields must also have a global index. A single index query request currently returns at most 3000 records.

Supported SQL query statements

Conditional queries

The operators =, >, >=, <, <=, !=, BETWEEN, IN, NOT IN, LIKE, NOT LIKE, AND, OR are supported, for example:

SELECT * FROM `mail` WHERE user_id>="10004" AND server_id=100;
SELECT * FROM `mail` WHERE user_id BETWEEN 10000 AND 10003 AND server_id=100;
SELECT * FROM `mail` WHERE user_id="10000" AND server_id=100 AND mail_id LIKE "210507%";
SELECT * FROM `mail` WHERE user_id>="10004" OR server_id<=200;

Note: in a BETWEEN query, between a and b corresponds to the closed range [a, b]; for example, between 1 and 100 includes both 1 and 100, i.e. the query range is [1, 100]. Note: LIKE queries support fuzzy matching; the "%" wildcard matches 0 or more characters, and the "_" wildcard matches exactly 1 character.
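
As an illustration of the "_" wildcard described above (the mail_id pattern is a placeholder), a query that fixes every character except the last one could be written as:

SELECT * FROM `mail` WHERE user_id="10000" AND server_id=100 AND mail_id LIKE "21050712_";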

Paged queries

Paged queries with LIMIT ... OFFSET are supported, for example:
SELECT * FROM mail WHERE user_id>"10000" LIMIT 100 OFFSET 2;
Note: LIMIT must currently be used together with OFFSET, i.e. forms such as LIMIT 1 or LIMIT 0,1 are not supported.

Aggregate query

Currently supported aggregate queries include SUM, COUNT, MAX, MIN, and AVG, for example:
SELECT server_id, COUNT(DISTINCT user_id), COUNT(*), SUM(state) FROM `mail` WHERE user_id>="10000" AND server_id=100;
Note: aggregate queries do not support LIMIT OFFSET, i.e. LIMIT OFFSET does not take effect. Note: currently only COUNT supports DISTINCT, i.e. select count(distinct(a)) from table where a > 1000; no other function supports DISTINCT.

Partial field query

Querying the values of only some fields is supported, for example:
SELECT user_id, `subject` FROM mail WHERE user_id>="10000";

Unsupported SQL query statements

Mixing aggregate queries with non-aggregate queries is not supported

select sum(a), a, b from table where a > 1000;
select count(*), * from table where a > 1000;

ORDER BY queries are not supported

select * from table where a > 1000 order by a;

GROUP BY queries are not supported

select * from table where a > 1000 group by a;

HAVING queries are not supported

select sum(a) from table where a > 1000 group by a having sum(a) > 10000;

Multi-table joint queries are not supported

select * from table1 where table1.a > 1000 and table1.a = table2.b;

Nested SELECT queries are not supported

select * from table where a > 1000 and b in (select b from table where b < 5000);

Other unsupported queries

(1) JOIN queries are not supported; (2) UNION queries are not supported; (3) queries such as select a+b from table where a > 1000 are not supported; (4) queries such as select * from table where a+b > 1000 are not supported; (5) queries such as select * from table where a >= b are not supported; (6) any other query not mentioned above is not supported.


TcaplusDB is Tencent's distributed NoSQL database product, with fully self-developed storage and scheduling code. It features a fused cache-plus-persistence architecture, PB-level storage, millisecond latency, lossless horizontal scaling, and complex data structures. It also offers a rich ecosystem, easy migration, extremely low operation and maintenance costs, and five-nines high availability. Its customers span gaming, internet, government, finance, manufacturing, and IoT.
