1.
range and spatial
2.
secondary and primary key
3.
secondary and spatial
4.
range and primary key
Q 1 / 56
1.
DynamoDB table
2.
DynamoDB trigger
3.
DynamoDB item
4.
DynamoDB index
Q 2 / 56
1.
partial
2.
sparse
3.
compound
4.
multikey
Q 3 / 56
1.
Set up Hadoop in pseudo-distributed mode.
2.
Set up HBase in local mode.
3.
Set up HBase in pseudo-distributed mode.
4.
Set up Hadoop in local mode.
Q 4 / 56
1.
DynamoDB
2.
Bigtable
3.
Redis
4.
MongoDB
Q 5 / 56
1.
medium
2.
short
3.
single bit
4.
long
Q 6 / 56
1.
Use IAM policy conditions
2.
Use IAM roles
3.
Use a VPC endpoint
4.
Use IAM policies
Q 7 / 56
1.
Scores.
2.
Ids.
3.
Values.
4.
Keys.
Q 8 / 56
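The options above use sorted-set vocabulary. If the question concerns Redis sorted sets, each member is ranked by an associated score; a minimal redis-py sketch (the `leaderboard` key and member names are hypothetical):

```python
import redis

r = redis.Redis(host="localhost", port=6379, decode_responses=True)

# Each member of a sorted set carries a numeric score that determines its rank.
r.zadd("leaderboard", {"alice": 120, "bob": 95})
r.zincrby("leaderboard", 10, "bob")  # bump bob's score by 10

# Members come back ordered by score.
print(r.zrange("leaderboard", 0, -1, withscores=True))
```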
1.
Dump the collection data, drop the collection, create a new collection and shard key, import the data.
2.
Add second shard key and drop the first shard key.
3.
Dump the collection data, drop the collection, presplit the data, create a new collection and shard key, import the data.
4.
Drop and recreate the shard key.
Q 9 / 56
1.
Security systems.
2.
Database systems.
3.
Storage systems.
4.
Query systems.
Q 10 / 56
1.
ElastiCache using Memcached.
2.
DynamoDB.
3.
DynamoDB Accelerator (DAX).
4.
ElastiCache using Redis.
Q 11 / 56
1.
Keep all information for an entity in a single row. Store related entities in adjacent rows.
2.
Keep all information for an entity in a single row.
3.
Split entities across multiple rows if the entity data is over thousands of MBs, or if it does not need atomic updates and reads.
4.
Split entities across multiple rows if the entity data is over hundreds of MBs, or if it does not need atomic updates and reads.
Q 12 / 56
1.
Key-value data model; transactionally consistent with ACID semantics.
2.
Document data model; transactionally consistent with ACID semantics.
3.
Key-value data model; transactions with tunable consistency.
4.
Document data model; transactions with tunable consistency.
Q 13 / 56
1.
Designate all three fields as the primary key.
2.
Concatenate all three fields into one new field, then designate that new field as the primary key.
3.
Designate two fields of the three fields as the primary key.
4.
Concatenate two fields into one new field, then designate that new field and the remaining field as the primary key.
Q 14 / 56
1.
Designate all three fields as the primary key.
2.
Concatenate all three fields into one new field, then designate that new field as the primary key.
3.
Designate two fields of the three fields as the primary key.
4.
Concatenate two fields into one new field, then designate that new field and the remaining field as the primary key.
Q 15 / 56
1.
multi-valued identifiers
2.
string identifiers
3.
timestamps
4.
frequently updated identifiers
Q 16 / 56
1.
Neptune
2.
DocumentDB
3.
DynamoDB
4.
Amazon Aurora
Q 17 / 56
1.
Memorystore
2.
Datastore
3.
Firebase
4.
Bigtable
Q 18 / 56
1.
Set up HBase in local mode.
2.
Set up Hadoop in pseudo-distributed mode.
3.
Set up HBase in pseudo-distributed mode.
4.
Set up Hadoop in local mode.
Q 19 / 56
1.
Use IAM roles.
2.
Use IAM policy conditions.
3.
Use a VPC endpoint.
4.
Use IAM policies.
Q 20 / 56
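For context on the IAM-based options above, fine-grained DynamoDB access is typically expressed as a policy condition on `dynamodb:LeadingKeys`. A hedged sketch, assuming a table named `MusicCollection`, a made-up account ID, and an illustrative policy name:

```python
import json
import boto3

# Hypothetical policy: callers may only read items whose partition key
# equals their own IAM user name (fine-grained access via a policy condition).
policy = {
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Allow",
            "Action": ["dynamodb:GetItem", "dynamodb:Query"],
            "Resource": "arn:aws:dynamodb:us-east-1:123456789012:table/MusicCollection",
            "Condition": {
                "ForAllValues:StringEquals": {"dynamodb:LeadingKeys": ["${aws:username}"]}
            },
        }
    ],
}

iam = boto3.client("iam")
iam.create_policy(PolicyName="DynamoOwnItemsOnly", PolicyDocument=json.dumps(policy))
```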
1.
Concatenate all three fields into one new field, then designate that new field as the primary key.
2.
Concatenate two fields into one new field, then designate that new field and the remaining field as the primary key.
3.
Designate all three fields as the primary key.
4.
Designate two fields of the three fields as the primary key.
Q 21 / 56
1.
keys
2.
values
3.
scores
4.
ids
Q 22 / 56
1.
DynamoDB
2.
Redis
3.
MongoDB
4.
Bigtable
Q 23 / 56
1.
MATCH (:Person)-->(:Car)-->(:Company) RETURN count(vehicle)
2.
MATCH (:Person)-->(:Car):(vehicle:Car)-->(:Company) RETURN count(vehicle)
3.
MATCH (:Person)-->(vehicle:Car)-->(:Company) RETURN count(vehicle)
4.
MATCH (:Person)-->(:Car), (vehicle:Car)-->(:Company) RETURN count(vehicle)
Q 24 / 56
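A hedged sketch of running a pattern-count query like the ones above from Python with the official neo4j driver; the connection URI, credentials, and anonymous relationship pattern are assumptions for illustration:

```python
from neo4j import GraphDatabase

# Count Car nodes that sit between a Person and a Company in the graph.
driver = GraphDatabase.driver("bolt://localhost:7687", auth=("neo4j", "password"))
with driver.session() as session:
    record = session.run(
        "MATCH (:Person)-->(vehicle:Car)-->(:Company) RETURN count(vehicle) AS cars"
    ).single()
    print(record["cars"])
driver.close()
```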
1.
Bigtable
2.
GraphDB
3.
DynamoDB
4.
Cosmos DB
Q 25 / 56
1.
Create a custom app profile to route batch updates.
2.
Create a custom app profile to route the batch update from that client.
3.
Update the default app profile to route the batch update from that client.
4.
Use the default app profile to route batch updates.
Q 26 / 56
1.
security systems
2.
database systems
3.
query systems
4.
storage systems
Q 27 / 56
1.
The queried key value expired in the last two seconds.
2.
The queried key value exists, but has no associated expire value.
3.
The queried key value does not exist.
4.
There are two expired keys with this value.
Q 28 / 56
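The options above describe what Redis reports about key expiry. As a point of reference, `TTL` returns -2 for a key that does not exist and -1 for a key with no associated expire; a minimal redis-py sketch (key names are hypothetical):

```python
import redis

r = redis.Redis(decode_responses=True)

r.setex("session:42", 30, "payload")   # key with a 30-second expiry
r.set("config:theme", "dark")          # key with no expiry

print(r.ttl("session:42"))    # remaining seconds, e.g. 30
print(r.ttl("config:theme"))  # -1: key exists but has no expire value
print(r.ttl("missing:key"))   # -2: key does not exist
```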
1.
Create an index on the key value used as the primary key.
2.
Create an index on the key value used as the foreign key.
3.
Create a multicolumn index on the key value used as the foreign key and the most unique column in the document.
4.
Create a multicolumn index on the key value used as the primary and also the foreign key.
Q 29 / 56
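If the options above refer to speeding up MongoDB joins, the usual move is a single-field index on the looked-up (foreign-key) field in the joined collection. A minimal pymongo sketch; the collection and field names are assumptions:

```python
from pymongo import ASCENDING, MongoClient

db = MongoClient("mongodb://localhost:27017").shop

# Index the field that $lookup will match against in the joined collection.
db.orders.create_index([("customer_id", ASCENDING)])
```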
1.
$group
2.
$match
3.
$lookup
4.
$project
Q 30 / 56
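The options above are MongoDB aggregation stages. A hedged pymongo sketch combining `$match`, `$lookup`, and `$project` over hypothetical `customers` and `orders` collections:

```python
from pymongo import MongoClient

db = MongoClient("mongodb://localhost:27017").shop

pipeline = [
    {"$match": {"active": True}},        # filter documents first
    {"$lookup": {                        # left outer join to another collection
        "from": "orders",
        "localField": "_id",
        "foreignField": "customer_id",
        "as": "orders",
    }},
    {"$project": {"name": 1, "order_count": {"$size": "$orders"}}},  # shape the output
]

for doc in db.customers.aggregate(pipeline):
    print(doc)
```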
1.
Cloud SQL
2.
Cloud Spanner
3.
Cloud Firestore
4.
Cloud Firebase
Q 31 / 56
1.
Upload data to S3, and create an S3 VPC endpoint. Use the Neptune loader to load from S3 into your Neptune instance.
2.
Add data to a Kinesis stream, and use the Neptune loader to load from S3 into your Neptune instance.
3.
Add data to a Kinesis stream, and create a Kinesis stream VPC endpoint. Use the Neptune loader to load from S3 into your Neptune instance.
4.
Upload data to S3. Use the Neptune loader to load from S3 into your Neptune instance.
Q 32 / 56
1.
Neo4j
2.
Cassandra
3.
Redis
4.
MySQL
Q 33 / 56
Q 34 / 56
1.
DynamoDB table
2.
DynamoDB trigger
3.
DynamoDB item
4.
DynamoDB index
Q 35 / 56
1.
secondary and primary key
2.
secondary and spatial
3.
range and spatial
4.
range and primary key
Q 36 / 56
1.
horizontally, infinitely
2.
vertically, horizontally
3.
vertically, infinitely
4.
horizontally, vertically
Q 37 / 56
1.
a relational database
2.
a columnstore database
3.
a document database
4.
a graph database
Q 38 / 56
1.
Rows become labels: tables become nodes.
2.
Tables become labels: rows become nodes.
3.
Tables become collections: rows become items.
4.
Rows become collections: tables become items.
Q 39 / 56
1.
sparse
2.
compound
3.
partial
4.
multikey
Q 40 / 56
1.
Delete the .mongorc.js file and restart the mongo shell.
2.
Use the mongo shell to create a command with the --norc option.
3.
Remove all lines in the .mongorc.js file and restart the mongo shell.
4.
Use the mongo shell to create a command with the --nodedefault option.
Q 41 / 56
1.
long
2.
short
3.
medium
4.
a single bit
Q 42 / 56
1.
map
2.
set
3.
list
4.
stack
Q 43 / 56
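If the options above are about picking a Redis structure that preserves insertion order, Redis lists do exactly that and can also back a stack. A minimal redis-py sketch (the key name is hypothetical):

```python
import redis

r = redis.Redis(decode_responses=True)

# Lists keep insertion order; pushing and popping the same end gives stack semantics.
r.rpush("widgets", "first", "second", "third")
print(r.lrange("widgets", 0, -1))  # ['first', 'second', 'third'] -- order preserved
print(r.rpop("widgets"))           # 'third' -- last in, first out
```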
1.
aws dynamodb query --table-name MusicCollection --key file://key.json
2.
aws dynamodb get-item --table-name MusicCollection --key file://key.json
3.
aws dynamodb select --table-name MusicCollection --key file://key.json
4.
aws dynamodb put-item --table-name MusicCollection --key file://key.json
Q 44 / 56
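The CLI calls above read the primary key from `key.json`. A hedged boto3 equivalent of the `get-item` variant; the `Artist`/`SongTitle` key attributes are assumed, mirroring the MusicCollection example in the AWS documentation:

```python
import boto3

dynamodb = boto3.client("dynamodb")

# Equivalent of: aws dynamodb get-item --table-name MusicCollection --key file://key.json
response = dynamodb.get_item(
    TableName="MusicCollection",
    Key={
        "Artist": {"S": "No One You Know"},   # partition key (assumed attribute name)
        "SongTitle": {"S": "Call Me Today"},  # sort key (assumed attribute name)
    },
)
print(response.get("Item"))
```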
1.
the amount of service calls
2.
the number of minutes
3.
the amount of service costs
4.
the number of nines
Q 45 / 56
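For the "number of nines" option above, availability is usually quoted as nines of uptime; a quick worked calculation of the downtime each level allows per year:

```python
MINUTES_PER_YEAR = 365 * 24 * 60  # 525,600

for nines, availability in [(2, 0.99), (3, 0.999), (4, 0.9999), (5, 0.99999)]:
    downtime = MINUTES_PER_YEAR * (1 - availability)
    print(f"{availability:.3%} ({nines} nines): ~{downtime:.1f} minutes of downtime per year")
```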
1.
You assign a default AWS encryption key to your table to encrypt data.
2.
You create an AWS encryption key and assign it to your table to encrypt data.
3.
None. Data is encrypted by default.
4.
You create an AWS encryption key and assign it to your database to encrypt data.
Q 46 / 56
1.
Implement a SortedSet object to generate a value.
2.
Use the GUID keyword to generate a value.
3.
Implement a List object to generate a value.
4.
Use the INCR keyword to generate a value.
Q 47 / 56
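On the INCR option above: Redis INCR atomically increments an integer key, which is a common way to hand out unique IDs. A minimal redis-py sketch (the key name is hypothetical):

```python
import redis

r = redis.Redis(decode_responses=True)

# INCR is atomic, so concurrent clients never receive the same ID.
new_id = r.incr("user:next_id")
print(f"user:{new_id}")
```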
1.
smembers
2.
returnall
3.
sunion
4.
sismember
Q 48 / 56
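The options above name Redis set commands. A minimal redis-py sketch showing what the real ones return (key names are hypothetical):

```python
import redis

r = redis.Redis(decode_responses=True)

r.sadd("tags:post:1", "redis", "nosql")
r.sadd("tags:post:2", "redis", "cache")

print(r.smembers("tags:post:1"))               # all members of one set
print(r.sismember("tags:post:1", "nosql"))     # membership test -> True
print(r.sunion("tags:post:1", "tags:post:2"))  # union of both sets
```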
1.
detailQueryExecution()
2.
showPlan()
3.
explain()
4.
describe()
Q 49 / 56
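Regarding the `explain()` option above, MongoDB exposes query plans through `explain()`. A minimal pymongo sketch over a hypothetical `restaurants` collection:

```python
from pymongo import MongoClient

db = MongoClient("mongodb://localhost:27017").test

# explain() returns the query plan (e.g. whether an index or a collection scan is used).
plan = db.restaurants.find({"borough": "Manhattan"}).explain()
print(plan["queryPlanner"]["winningPlan"])
```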
1.
The global secondary indexes in DynamoDB are consistent, and are not guaranteed to return correct results.
2.
The global secondary indexes in DynamoDB are transactionally consistent, and are guaranteed to return correct results.
3.
The global secondary indexes in DynamoDB are partially consistent, and are not guaranteed to return correct results.
4.
The global secondary indexes in DynamoDB are eventually consistent, and are not guaranteed to return correct results.
Q 50 / 56
1.
ADD mystream * sensor-id 1234 temperature 19.8 1518951480106-1
2.
UPDATE mystream * sensor-id 1234 temperature 19.8 1518951480106-3
3.
XADD mystream * sensor-id 1234 temperature 19.8 1518951480106-0
4.
INSERT mystream * sensor-id 1234 temperature 19.8 1518951480106-2
Q 51 / 56
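On the XADD option above: Redis streams are appended to with XADD, which auto-generates an entry ID when `*` is passed. A minimal redis-py sketch mirroring the sensor example:

```python
import redis

r = redis.Redis(decode_responses=True)

# '*' (the default id) asks Redis to generate the entry ID, e.g. '1518951480106-0'.
entry_id = r.xadd("mystream", {"sensor-id": "1234", "temperature": "19.8"})
print(entry_id)

# Read every entry currently in the stream.
print(r.xrange("mystream", "-", "+"))
```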
`var indexCollection = function(db) {return co(function*() {...});};`
1.
`const results = yield db.table('restaurants').createIndex({"name": 1}, null); return results;`
2.
`const results = yield db.collection('restaurants').createIndex({"name": 0}, null); return results;`
3.
`const results = yield db.collection('restaurants').createIndex({"name": 1}, null); return results;`
4.
`const results = yield db.table('restaurants').createIndex({"name": 0}, null); return results;`
Q 52 / 56
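The Node snippet above builds an index with `createIndex`. For comparison, a hedged pymongo sketch of a single-field ascending index on `name` (the `restaurants` collection name is taken from the options):

```python
from pymongo import ASCENDING, MongoClient

db = MongoClient("mongodb://localhost:27017").test

# 1 (ASCENDING) indexes the field in ascending order; the index name is returned.
index_name = db.restaurants.create_index([("name", ASCENDING)])
print(index_name)  # e.g. 'name_1'
```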
1.
MATCH (c:Company {name: 'Neo4j'}) RETURN c, MATCH (p:Person) WHERE p.name = 'Jennifer' RETURN p,
MATCH (t:Technology)-[:LIKES]-(a:Person {name: 'Jennifer'}) RETURN t.type;
2.
MATCH (c:Company {name: 'Neo4j'}) RETURN c, MATCH (p:Person) WHERE p.name = 'Jennifer' RETURN p,
MATCH (t:Technology)-[:LIKES]-(a:Person {name: 'Jennifer'}) RETURN t.type
3.
MATCH (c:Company {name: 'Neo4j'}) RETURN c AND MATCH (p:Person) WHERE p.name = 'Jennifer' RETURN p,
AND MATCH (t:Technology)-[:LIKES]-(a:Person {name: 'Jennifer'}) RETURN t.type;
4.
MATCH (c:Company {name: 'Neo4j'}) RETURN c;MATCH (p:Person) WHERE p.name = 'Jennifer' RETURN p;
MATCH (t:Technology)-[:LIKES]-(a:Person {name: 'Jennifer'}) RETURN t.type;
Q 53 / 56
1.
Neo4j
2.
Redis
3.
MySQL
4.
MongoDB
Q 54 / 56
1.
graph
2.
key-value
3.
document
4.
columnstore
Q 55 / 56
1.
Cassandra
2.
Bigtable
3.
Redis
4.
HBase
Q 56 / 56