AltME: Databases

Messages

TomBon
You have more than one solution. The first is a simple serial SELECT on each table, then compare the outputs for equality.
Of course this produces unnecessary DB overhead, but I guess you won't feel any speed difference unless you are
serving a whole city concurrently. Another, better option is a JOIN or UNION.
SELECT table_name1.column_name(s), ...
FROM table_name1
LEFT JOIN table_name2
ON table_name1.column_name=table_name2.column_name
The JOIN direction (LEFT, RIGHT, INNER) relative to your reference table is important here.
The result set is a table containing BOTH columns: if both have a value you have a match; if one is empty (NULL) you don't.
Index both fields to accelerate the query, and use something like the free SQLyog
to test different queries and make debugging easier for yourself.
While your situation reminds me of myself, sitting in front of a monochrome Ashton-Tate dot prompt some decades ago
and asking "what next?", you should 'bite' your way through the rest yourself now. It won't help you in the long term if you don't.
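A minimal sketch of the match test TomBon describes, assuming the REBOL MySQL driver used later in this thread and two illustrative tables data and data1 that share a column oneone:

; a left join keeps every row of data; unmatched rows get none in the data1 column
rows: read/custom mysql://root@localhost/test [
    {select data.oneone, data1.oneone
     from data left join data1 on data.oneone = data1.oneone}
]
foreach row rows [
    either none? row/2 [print ["no match for" row/1]] [print ["match:" row/1]]
]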
afsanehsamim
@TomBon: my query for joining the two tables is: insert db ["select * from data LEFT JOIN data1 ON data.oneone=data1.oneone"] and the output is: [
    ["c" "a" "t" "a" "e" "r" "o" "a" none none none none none none none none]
]
Please tell me what I should write in the query so that I get values instead of none in the output.
afsanehsamim
Guys, when I enter the correct value in the form, the above join query works properly... I need help writing queries for the other condition: if the user enters a wrong value, it should still join against the first table but compare each field individually and show an error message.
The output of this query: insert db [{select * from data LEFT JOIN data1 ON data.oneone=data1.oneone}]
is: [
    ["c" "a" "t" "a" "e" "r" "o" "a" "c" "a" "t" "a" "e" "r" "o" "a"]
]
afsanehsamim
Is there anyone who can help me?
I compare each field of the tables with the others like this:
insert db ["select data.oneone, data1.oneone from data LEFT JOIN data1 ON data.oneone=data1.oneone"]
results: copy db
probe results
insert db ["select data.onetwo, data1.onetwo from data LEFT JOIN data1 ON data.onetwo=data1.onetwo"]
results: copy db
probe results
insert db ["select data.onethree, data1.onethree from data LEFT JOIN data1 ON data.onethree=data1.onethree"]
results: copy db
probe results
...
I got the results.
I need code for showing a message to the user; that is, after each join it should tell the user whether the value is correct or not.
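One possible way to turn those per-column joins into correct/wrong messages, as a rough sketch: it assumes the same mysql://root@localhost/test driver used above and treats a none in the second column as a failed match; the message wording is made up here.

; compare each of the three columns used in the queries above
foreach col [oneone onetwo onethree] [
    sql: rejoin ["select data." col ", data1." col
        " from data left join data1 on data." col " = data1." col]
    foreach row read/custom mysql://root@localhost/test reduce [sql] [
        either none? row/2 [
            print [col "value" row/1 "is wrong"]
        ][
            print [col "value" row/1 "is correct"]
        ]
    ]
]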
afsanehsamim
Guys! Could you please tell me, after comparing the values of the two tables, how we can show the output on a web page?
After writing the queries: foreach row read/custom mysql://root@localhost/test ["select data.oneone,data1.oneone from data LEFT JOIN data1 ON data.oneone=data1.oneone"] [print row]
foreach row read/custom mysql://root@localhost/test ["select data.onetwo,data1.onetwo from data LEFT JOIN data1 ON data.onetwo=data1.onetwo"] [print row] ....
I got these results:
c c
a none
t t
a none
e none
r none
o none
a none
Now how can I write the query for all the values which are the same, and print the correct message on a web page?
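A rough sketch of showing the comparison on a web page, assuming the script runs as a REBOL CGI under a web server; the interpreter path, the HTML markup and the message wording are all assumptions, not something from the thread:

#!/usr/local/bin/rebol -cs    ; hypothetical path to the REBOL interpreter
REBOL []
print "Content-type: text/html^/"    ; CGI header, then the page itself
print "<html><body>"
foreach row read/custom mysql://root@localhost/test [
    "select data.oneone, data1.oneone from data LEFT JOIN data1 ON data.oneone=data1.oneone"
] [
    either none? row/2 [
        print ["<p>" row/1 "is wrong</p>"]
    ][
        print ["<p>" row/1 "is correct</p>"]
    ]
]
print "</body></html>"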

afsanehsamim
Hey guys... I have just two days left for my project! Could you help me?
I could not do the last step... I need to show the result of comparing the values on a web page.

TomBon
A quick update on elasticsearch.
Currently I have reached 2TB of data (~85M documents) on a single node.
Queries are now starting to slow down, but the system is very stable even under
heavy load. While queries on average took between 50-250 ms against a
dataset of around 1TB, the same queries now fall in the range of 900-1500 ms.
The average allocated Java heap is around 9GB, which is nearly 100% of the
max heap size, with a 15-shard and 0-replica setting.
elasticsearch looks like a very good candidate for handling big data with
a need for 'near realtime' analysis. Classical RDBMSs like MySQL and PostgreSQL
were grilled at around 150-500GB. Another tested candidate was MongoDB,
which was great too, but since it stores all metadata and fields uncompressed,
the waste of disk space was ridiculously high. Furthermore, query execution times
differed unpredictably, by a factor of 3, without any known reason.
Tokyo Cabinet started fine, but at around 1TB I noticed file integrity problems
which led to endless restore/repair procedures. Adding sharding logic
by coding an additional layer wasn't very motivating, but it could solve this issue.
Within the next six months the data size should reach the 100TB mark.
It would be interesting to see how elasticsearch will scale and how many
nodes are necessary to handle this efficiently.
Maxim
When you talk about "documents", what type of documents are they?
Gregg
Thanks for the info Tomas.
TomBon
Crawled HTML/MIME-embedded documents/images etc. as plain compressed source (avg. 25KB), plus 14 searchable metafields (n-gram) used to train different NN types for pattern recognition.
Maxim
thanks  :-)

MaxV
I have a problem with RebDB: how does db-select/group work?
Example:
>> db-select/where/group/count [ID title post date]  archive  [find post "t" ] [ID]
** User Error: Invalid number of group by columns
** Near: to error! :value
Endo
Don't you need to use aggregate functions when you're grouping?
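A hedged guess at what RebDB expects here, reading from the error message rather than from the documentation: the group-by block apparently has to account for every non-aggregate column in the select block. The table and column names below are MaxV's; the /count refinement is left out because its interaction with /group is not shown in the thread.

; group by every selected column ...
db-select/where/group [ID title post date] archive [find post "t"] [ID title post date]
; ... or select only the column you group by
db-select/where/group [ID] archive [find post "t"] [ID]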
