commit 772309f84c1daae2e8cbd3c40764d404d6e8aeb6
Author: Shaohua Wang <shaohua.wang@oracle.com>
Date:   Fri Jan 15 10:32:57 2016 +0800

    BUG#22530768 Innodb freeze running REPLACE statements
    
    We can see from the hang stacktrace that srv_monitor_thread is blocked
    when getting log_sys::mutex, so that sync_arr_wake_threads_if_sema_free
    cannot get a chance to break the mutex deadlock.
    
    The fix is simply removing any mutex wait in srv_monitor_thread.
    
    Patch is reviewed by Sunny over IM.

commit 0f5490c7fff651091d1bc0947ad26c1712df390e
Merge: 415f65b d4115d9
Author: Bjorn Munch <bjorn.munch@oracle.com>
Date:   Mon Jan 11 14:20:36 2016 +0100

    Updated copyright year in user visible text

commit d4115d987d630ea5a61dc35f314efdbea0d5d3e2
Author: Bjorn Munch <bjorn.munch@oracle.com>
Date:   Mon Jan 11 14:10:58 2016 +0100

    Updated copyright year in user visible text

commit 415f65b500798631f8b8fa090e53984d3c04935a
Merge: 8ebc5ee da96d6b
Author: Yashwant Sahu <yashwant.sahu@oracle.com>
Date:   Mon Jan 11 14:45:15 2016 +0530

    Merge branch 'mysql-5.5' into mysql-5.6

commit da96d6beae895c880da7df778281ebc10ec31bcc
Author: Yashwant Sahu <yashwant.sahu@oracle.com>
Date:   Mon Jan 11 14:44:49 2016 +0530

    Bug #22295186:   CERTIFICATE VALIDATION BUG IN MYSQL MAY ALLOW MITM
    
    Test fix for 5.5 and 5.6

commit 8ebc5ee5ea1763ad09b39d1e926d4e551f51d1d0
Merge: 928663a 648d587
Author: Yashwant Sahu <yashwant.sahu@oracle.com>
Date:   Mon Jan 11 09:24:06 2016 +0530

    Merge branch 'mysql-5.5' into mysql-5.6

commit 648d5877241a235e3870b6043ca94a543271f579
Author: Yashwant Sahu <yashwant.sahu@oracle.com>
Date:   Mon Jan 11 09:23:31 2016 +0530

    Bug #22295186:    CERTIFICATE VALIDATION BUG IN MYSQL MAY ALLOW MITM.
    
    Test Fix

commit 928663a01464d59abf2d9b9184b675447a219806
Merge: d0148e9 041bd3b
Author: Yashwant Sahu <yashwant.sahu@oracle.com>
Date:   Mon Jan 11 07:13:51 2016 +0530

    Merge branch 'mysql-5.5' into mysql-5.6
    
    Conflicts:
    	sql-common/client.c

commit 041bd3b4457959db4925e6f8788790882634e945
Author: Yashwant Sahu <yashwant.sahu@oracle.com>
Date:   Mon Jan 11 07:09:13 2016 +0530

    Bug #22295186:  CERTIFICATE VALIDATION BUG IN MYSQL MAY ALLOW MITM

commit d0148e9fda868be70f73584a95bf50b2b5ea7f1b
Author: Tor Didriksen <tor.didriksen@oracle.com>
Date:   Wed Jan 6 16:42:13 2016 +0100

    Bug#22504264 MYSQL COMMUNITY SERVER PACKAGE FOR SOLARIS 10 X86/AMD64 BROKEN
    
    Our binaries depend on libstlport, which is not part of Solaris by default,
    so we ship it as part of mysql packages.
    
    The problem was that this dependency was not noted in the client library.
    
    Fix: backport more from:
       Bug#16555106 FIX BROKEN BUILD WITH SOLARIS/GCC 64BIT MODE
    
    Specifically: add -R$ORIGIN/../lib when linking the client library.

commit c0cd5f674ef6bd7d8c3d73e51aef87e3681a4d70
Merge: 042ac81 021794e
Author: Sreeharsha Ramanavarapu <sreeharsha.ramanavarapu@oracle.com>
Date:   Fri Jan 8 06:50:49 2016 +0530

    Merge branch 'mysql-5.5' into mysql-5.6

commit 021794eb3079c727200c4eaae547f83a71197d35
Author: Sreeharsha Ramanavarapu <sreeharsha.ramanavarapu@oracle.com>
Date:   Fri Jan 8 06:46:59 2016 +0530

    Bug #22232332: SAVING TEXT FIELD TO TEXT VARIABLE IN A
                   PROCEDURE RESULTS IN GARBAGE BYTES
    
    Issue:
    -----
    This problem occurs under the following conditions:
    
    a) A stored procedure has a variable declared as TEXT/BLOB.
    b) Data is copied into the variable using the
       SELECT...INTO syntax from a TEXT/BLOB column.
    
    Data corruption can occur in such cases.
    
    SOLUTION:
    ---------
    The blob type does not allocate space for the string to be
    stored. Instead it contains a pointer to the source string.
    Since the source is deallocated immediately after the
    select statement, this can cause data corruption.
    
    As part of the fix for Bug #21143080, when the source was
    part of the table's write-set, blob would allocate the
    necessary space. But that fix missed the possibility that,
    as in the above case, the target might be a variable.
    
    This fix adds back the copy_blobs check that was removed by
    the earlier fix.
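
    A minimal, self-contained sketch of the failure pattern described above
    (illustrative C++ only, not the server's blob handling code): a value that
    merely borrows a pointer into its source buffer dangles once the source
    is deallocated, while an explicit copy survives.

        #include <cstddef>
        #include <iostream>
        #include <string>

        // Illustrative stand-in for a blob value that only points at its source.
        struct BlobRef {
            const char* ptr = nullptr;
            std::size_t len = 0;
        };

        int main() {
            std::string copy;      // target that owns its own storage
            BlobRef     ref;       // target that only borrows a pointer

            {
                std::string source = "text column payload";
                ref.ptr = source.data();   // borrows: dangles after this scope
                ref.len = source.size();
                copy.assign(source);       // "copy_blobs"-style deep copy
            }                              // source deallocated here

            std::cout << copy << "\n";             // safe: owns its bytes
            // std::cout.write(ref.ptr, ref.len);  // undefined: dangling pointer
            return 0;
        }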

commit 042ac813f52e97576298dfe9ee01ff5cdf056548
Merge: 2242395 86e3f8e
Author: Ajo Robert <ajo.robert@oracle.com>
Date:   Thu Jan 7 14:43:47 2016 +0530

    Bug#21770366 backport bug#21657078 to 5.5 and 5.6
    
    Problem Statement
    =========
    Fix various issues when building MySQL with Visual Studio 2015.
    
    Fix:
    =======
    - Visual Studio 2015 adds support for timespec. Add check for
      this and only use our replacement if timespec is not defined.
    - Rename lfind/lsearch to my* to avoid redefinition problems.
    - Set default value for TMPDIR to "" on Windows as P_tmpdir
      no longer exists.
    - Use the VS definition of snprintf if available.
    - tzname is now renamed to _tzname.
    - This patch raises the minimum required version of the WiX Toolkit
      to 3.8, which is needed to make MSI packages with
      Visual Studio 2015.

commit 86e3f8edde0c1fcb3e910195b95f80684ad0522f
Author: Ajo Robert <ajo.robert@oracle.com>
Date:   Thu Jan 7 14:36:19 2016 +0530

    Bug#21770366 backport bug#21657078 to 5.5 and 5.6
    
    Problem Statement
    =========
    Fix various issues when building MySQL with Visual Studio 2015.
    
    Fix:
    =======
    - Visual Studio 2015 adds support for timespec. Add check and
      related code to use this and only use our replacement if
      timespec is not defined.
    - Rename lfind/lsearch to my* to avoid redefinition problems.
    - Set default value for TMPDIR to "" on Windows as P_tmpdir
      no longer exists.
    - Use the VS definition of snprintf if available.
    - tzname is now renamed to _tzname.

commit 224239501ad5a571e0f3fc41bc5c1d65094a7bba
Author: Arun Kuruvila <arun.kuruvila@oracle.com>
Date:   Wed Jan 6 19:41:00 2016 +0530

    Bug #17883203 : MYSQL EMBEDDED MYSQL_STMT_EXECUTE RETURN
                    "MALFORMED COMMUNICATION PACKET" ERROR
    
    Post push to fix valgrind test case failure in pb2.

commit 93a2e4f627748cefbbde530c189327146e752ce7
Author: Sreeharsha Ramanavarapu <sreeharsha.ramanavarapu@oracle.com>
Date:   Wed Jan 6 08:58:15 2016 +0530

    Bug #22459137: MYSQL 5.6: VALGRIND FAILURES IN PB2 WITH
                   OPEN SSL TRACE
    
    Open ssl has valgrind issues. The relevant stacktraces have
    been added to valgrind.supp.
    
    This is a backport from 5.7:
    2ad2964fe0c254ee77c402b674befaa216f6cf20

commit 14785bdb2a989492719b3cecafbc5b9c0c600589
Merge: 116bab1 b65c01d
Author: V S Murthy Sidagam <venkata.sidagam@oracle.com>
Date:   Mon Jan 4 15:46:34 2016 +0530

    Merge branch 'mysql-5.5' into mysql-5.6

commit b65c01d6a9ee2ccd637ae52fa5bbc7fdf701d8da
Author: V S Murthy Sidagam <venkata.sidagam@oracle.com>
Date:   Mon Jan 4 15:31:45 2016 +0530

    Description: yaSSL was only handling the cases of zero or
    one leading zeros for the key agreement instead of
    potentially any number.
    This caused about 1 in 50,000 connections to fail
    when using DHE cipher suites.  The second problem was the
    case where a server would send a public value shorter than
    the prime value, causing about 1 in 128 client connections
    to fail, and also caused the yaSSL client to read off the
    end of memory.
    All client side DHE cipher suite users should update.
    Note: The patch was received from the YaSSL people.

commit 116bab1e51a762e6484d400120187bcfe86bf803
Merge: 36d85dd d6a208a
Author: Sreeharsha Ramanavarapu <sreeharsha.ramanavarapu@oracle.com>
Date:   Thu Dec 31 07:32:05 2015 +0530

    Merge branch 'mysql-5.5' into mysql-5.6

commit d6a208adc1275aa005cd3707cd1b39713d288f12
Author: Sreeharsha Ramanavarapu <sreeharsha.ramanavarapu@oracle.com>
Date:   Thu Dec 31 07:31:12 2015 +0530

    Bug #21564557: INCONSISTENT OUTPUT FROM 5.5 AND 5.6
                   UNIX_TIMESTAMP(STR_TO_DATE('201506', "%Y%M"
    
    Issue:
    -----
    When an invalid date is supplied to the UNIX_TIMESTAMP
    function from STR_TO_DATE, no check is performed before
    converting it to a timestamp value.
    
    SOLUTION:
    ---------
    Add a check_date function call and, only if it succeeds,
    proceed to the timestamp conversion.
    
    No warning will be returned for dates having zero in
    month/date, since partial dates are allowed. UNIX_TIMESTAMP
    will return only a zero for such values.
    
    The problem has been handled in 5.6+ with WL#946.
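
    A rough standalone sketch of the validate-before-convert idea, using
    standard C++ time functions rather than the server's internal check_date()
    machinery; the helper names below are invented for illustration.

        #include <ctime>
        #include <iostream>

        // Illustrative validity check (stand-in for a check_date-style call):
        // reject impossible month/day values before any timestamp conversion.
        static bool date_is_valid(int year, int month, int day) {
            return year >= 1970 && month >= 1 && month <= 12 && day >= 1 && day <= 31;
        }

        // Convert only if the date passed validation; return 0 otherwise,
        // mirroring "UNIX_TIMESTAMP will return only a zero for such values".
        static long long to_unix_timestamp(int year, int month, int day) {
            if (!date_is_valid(year, month, day)) return 0;
            std::tm tm = {};
            tm.tm_year = year - 1900;
            tm.tm_mon  = month - 1;
            tm.tm_mday = day;
            std::time_t t = std::mktime(&tm);
            return t == static_cast<std::time_t>(-1) ? 0 : static_cast<long long>(t);
        }

        int main() {
            std::cout << to_unix_timestamp(2015, 6, 0) << "\n";  // zero day -> 0
            std::cout << to_unix_timestamp(2015, 6, 1) << "\n";  // valid -> timestamp
        }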

commit 36d85dd96766282f144ec8ef68d877573775f356
Merge: f251868 d1678da
Author: Karthik Kamath <karthik.kamath@oracle.com>
Date:   Tue Dec 29 16:10:00 2015 +0530

    Merge branch 'mysql-5.5' into mysql-5.6

commit d1678da866eb6f543b0cc83990509fa3ba5d3707
Author: Karthik Kamath <karthik.kamath@oracle.com>
Date:   Tue Dec 29 15:58:44 2015 +0530

    BUG#21902059: "CREATE TEMPORARY TABLE SELECT ..." AND BIT(1)
                   COLUMNS
    
    ANALYSIS:
    =========
    A valgrind error is reported when CREATE TABLE .. SELECT
    involving BIT columns triggers a column type redefinition.
    
    In general the pack_flag is set for BIT columns in
    'mysql_prepare_create_table()'. However, during the above
    operation, redefined column types were handled after the
    special handling for BIT columns, and thus pack_flag ended
    up not being set correctly, triggering the valgrind error.
    
    FIX:
    ====
    The patch fixes this problem by setting pack_flag correctly
    for BIT columns in the case of column type redefinition.

commit f2518688780bea3c89d6bda5229959f63792f959
Author: Nisha Gopalakrishnan <nisha.gopalakrishnan@oracle.com>
Date:   Thu Dec 24 16:39:33 2015 +0530

    Bug#21345391: ALTER TABLE ... CONVERT TO CHARACTER SET NOT EFFECT
                  AND REMAIN A TEMP TABLE
    
    Analysis:
    =========
    ALTER TABLE, CONVERT TO CHARACTER SET operation remains
    ineffective if
    a) Table contains only numerical data types.
    b) Algorithm used is INPLACE.
    Also the temporary '.frm' file created during the
    operation is not cleaned up.
    
    For the above ALTER TABLE operation, the appropriate handler
    flag is not set, resulting in a no-op. Hence the operation
    remains ineffective and the CHARACTER SET is not altered.
    Also, the temporary '.frm' file created was not cleaned up
    at the end of the no-op.
    
    Note: The above operation for tables having character
    data types reports an appropriate error.
    
    Fix:
    ===
    a) Removed the ALTER_CONVERT flag used by parser
       to flag the CONVERT TO CHARACTER SET operation
       since it has similar use as that of ALTER_OPTIONS.
    b) ALTER_OPTIONS is now used to indicate the CONVERT
       TO CHARACTER SET operation as well.
    c) Added code to clean up the temporary '.frm' file
       created during no-op.

commit 25432fa0c008afa4f545225b6f07cfba1837d475
Author: Shaohua Wang <shaohua.wang@oracle.com>
Date:   Tue Dec 22 22:07:13 2015 +0800

    BUG#22385442 - INNODB: DIFFICULT TO FIND FREE BLOCKS IN THE BUFFER POOL
    
    Problem:
    We keep pinning pages in dict_stats_analyze_index_below_cur(),
    but don't release these pages. When we have a relatively small
    buffer pool size and a big innodb_stats_persistent_sample_pages,
    there will be no free pages left for use.
    
    Solution:
    Use a separate mtr in dict_stats_analyze_index_below_cur(),
    and commit mtr before return.
    
    Reviewed-by: Jimmy Yang <jimmy.yang@oracle.com>
    RB: 11362

commit 8a54f1307e076e1c66786adb1efd7ad9ebd1519c
Author: Aditya A <aditya.a@oracle.com>
Date:   Fri Dec 18 23:54:05 2015 +0530

    Bug#20160327 OPTIMIZE TABLE REMOVES THE DATA DIRECTORY IN PARTITIONS
    
    Test case fix

commit 15a9f9c46a428fa421471d89c0e28c7ee447b61d
Author: Aditya A <aditya.a@oracle.com>
Date:   Thu Dec 17 17:15:20 2015 +0530

    Bug#20160327	OPTIMIZE TABLE REMOVES THE DATA DIRECTORY IN PARTITIONS
    
    Post push failure fix

commit 24b4841ce00ead5cff994a253bd047890619bbc1
Author: Darshan M N <darshan.m.n@oracle.com>
Date:   Wed Dec 16 18:58:53 2015 +0530

    Bug#22016556 INNODB LOOKS FOR BUFFER POOL FILE NAME IN '/' IF
    INNODB_DATA_HOME_DIR IS EMPTY
    
    Issue:
    ======
    If the server is started with the following parameter set in the cnf file -
    "innodb_data_home_dir =", to specify absolute paths for the data files
    listed in the innodb_data_file_path value, then the server looks for the
    buffer pool dump file in the root directory and throws an error.
    
    Fix:
    ====
    The directory path of the buffer pool dump file is handled such that the
    server creates and looks for the buffer pool dump file at the right place.
    
    RB: 11081
    Reviewed-by: Satya Bodapati <satya.bodapati@oracle.com>
    Reviewed-by: Jimmy Yang <Jimmy.Yang@oracle.com>

commit 14a59a661e32e6c3f9f156f9bf0f6c8c631bf52e
Author: Aditya A <aditya.a@oracle.com>
Date:   Wed Dec 16 16:38:46 2015 +0530

    Bug#20160327    OPTIMIZE TABLE REMOVES THE DATA DIRECTORY IN PARTITIONS
    
    PROBLEM
    
    mysql-5.6+ uses inplace alter to do OPTIMIZE TABLE. Inplace alter was
    failing to update create_info->data_file_name, because of which,
    after the optimize, the ibd file was recreated in the default path
    rather than in the specified path.
    
    FIX
    
    Update the create_info structure in ha_partition::prepare_inplace_alter_table()
    
    [ Reviewed by Kevin and Deb #rb11185 and #rb11267 ]

commit 835d68b100e5bc84e3f1464acfe14d49f3cfecf8
Merge: 9432f3a 8fdcb2b
Author: Balasubramanian Kandasamy <balasubramanian.kandasamy@oracle.com>
Date:   Wed Dec 16 12:09:01 2015 +0530

    Merge branch 'mysql-5.5' into mysql-5.6
    
    Bug#22361702 - /USR/BIN/MYSQL-SYSTEMD-START DOES NOT RETURN CONTROL TO COMMAND LINE
    
    If the configuration file contains multiple datadir lines, use the last datadir
    entry in the RPM installation scripts.

commit 8fdcb2b3fc6f573cad060661be00f7353edaf704
Author: Balasubramanian Kandasamy <balasubramanian.kandasamy@oracle.com>
Date:   Wed Dec 16 12:03:04 2015 +0530

    Bug#22361702 - /USR/BIN/MYSQL-SYSTEMD-START DOES NOT RETURN CONTROL TO COMMAND LINE
    
    If the configuration file contains multiple datadir lines, use the last datadir
    entry in the RPM installation scripts.

commit 9432f3a2daee6f76af7ba5ace7c13bfd57fbd8fc
Merge: 4356ed3 dffd85e
Author: Sujatha Sivakumar <sujatha.sivakumar@oracle.com>
Date:   Wed Dec 16 10:51:04 2015 +0530

    Merge branch 'mysql-5.5' into mysql-5.6
    
    Null Merge

commit dffd85e1e76e5d1fd2baf1fecb62ac172e308dea
Author: Sujatha Sivakumar <sujatha.sivakumar@oracle.com>
Date:   Wed Dec 16 10:48:57 2015 +0530

    Bug#22278455: MYSQL 5.5:RPL_BINLOG_INDEX FAILS IN VALGRIND.
    
    Problem:
    =======
    rpl_binlog_index.test fails with the following valgrind error.
    
    line
    Conditional jump or move depends on uninitialised value(s)
    at 0x4C2F842: __memcmp_sse4_1 (in /usr/lib64/valgrind/
    vgpreload_memcheck-amd64-linux.so)
    0x739E39: find_uniq_filename(char*) (log.cc:2212)
    0x73A11B: MYSQL_LOG::generate_new_name(char*, char const*)
    (log.cc:2492)
    0x73A1ED: MYSQL_LOG::init_and_set_log_file_name(char const*,
    char const*, enum_log_type, cache_type) (log.cc:2289)
    0x73B6F5: MYSQL_BIN_LOG::open(char const*, enum_log_type,
    
    
    Analysis and fix:
    =================
    This issue was fixed as part of Bug#20459363 fix in 5.6 and
    above. Hence backporting the fix to MySQL-5.5.

commit 4356ed36c3e641ffc2fbb4894c2bdccd7aee42ba
Merge: 51100e1 6bc30f8
Author: Hery Ramilison <hery.ramilison@oracle.com>
Date:   Mon Dec 7 13:03:54 2015 +0100

    Merge branch 'mysql-5.6.28-release' into mysql-5.6

commit 51100e123401952f74f8a61aef5cac31a2d45ef0
Author: Thirunarayanan Balathandayuthapani <thirunarayanan.balathandayuth@oracle.com>
Date:   Sun Dec 6 01:12:24 2015 +0530

    Bug #21762319	ADDING INDEXES ON EMPTY TABLE IS SLOW
    		WITH LARGE INNODB_SORT_BUFFER_SIZE.
    Problem:
    =======
    Adding an index on an empty table is slow when innodb_sort_buffer_size
    is large.
    
    Fix:
    ====
    Delaying the temporary file creation for the ALTER TABLE operation
    avoids creating the file for an empty table.
    
    Reviewed-by: Marko Mäkelä <marko.makela@oracle.com>
    RB: 11200

commit d8b50c2b8ed01d4b840dd409a381c6b82f35b7f8
Author: V S Murthy Sidagam <venkata.sidagam@oracle.com>
Date:   Fri Dec 4 18:55:38 2015 +0530

    Bug #22216715 IN-CONSISTENT ERROR MESSAGES NOTICED WITH SSL AND WITHOUT SSL UNDER
    
    Post push changes

commit 8f6e4f5a9832b1b3905e1caccf474d9c7371abf8
Author: Shishir Jaiswal <shishir.j.jaiswal@oracle.com>
Date:   Fri Dec 4 17:18:14 2015 +0530

    Bug#21631855: HANDLE_FATAL_SIGNAL (SIG=11) IN FT_BOOLEAN_CHECK_SYNTAX_STRING |
                  FT_PARSER.C:92
    
    DESCRIPTION
    ===========
    Starting the server with default charset as 'utf16le', and
    setting system variable "ft_boolean_syntax" results in
    server crash.
    
    ANALYSIS
    ========
    Function "ft_boolean_check_syntax_string()" uses the MACRO
    "my_isalnum" which checks for the member "ctype" of the
    default charset. Since this member is NULL (for charset
    'utf16le') therefore when accessed, results in SEGFAULT!
    
    We can prevent it altogether by replacing "my_isalnum"
    (which depends on the default charset) with C's "isalnum()"
    . Using "isalnum()" here is sufficient as the input to this
    function is guaranteed to be in ASCII format.
    
    FIX
    ===
    Replaced user defined "my_isalnum" with C's "isalnum()"
    in the function "ft_boolean_check_syntax_string()"
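
    A small standalone illustration of the substitution described above: for
    input known to be plain ASCII, the locale- and charset-independent C
    isalnum() is sufficient (note the unsigned char cast needed to avoid
    undefined behaviour). This is only a sketch, not the server's
    ft_boolean_check_syntax_string() itself.

        #include <cctype>
        #include <cstdio>

        // Returns true if every byte of the ASCII string is alphanumeric.
        static bool all_alnum_ascii(const char* s) {
            for (; *s != '\0'; ++s) {
                if (!std::isalnum(static_cast<unsigned char>(*s)))
                    return false;
            }
            return true;
        }

        int main() {
            std::printf("%d\n", all_alnum_ascii("abc123"));    // 1
            std::printf("%d\n", all_alnum_ascii("+ -><()~*")); // 0
            return 0;
        }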

commit cdc72eb113d93b9148c92cdfe84859f8644a3f6e
Author: Shaohua Wang <shaohua.wang@oracle.com>
Date:   Wed Dec 2 12:14:52 2015 +0800

    BUG#22291765 INSERT A TOKEN OF 84 4-BYTES CHARS INTO FTS INDEX
                 CAUSES SERVER CRASH
    
    We allow a max token size of up to 84 characters in both MyISAM and
    InnoDB, but we assume the max multi-byte char length is 3 bytes, which
    is not true. We support 4-byte chars, e.g. in utf8mb4. So inserting
    a token of 84 4-byte chars will cause a server crash.
    
    Reviewed-by: Jimmy Yang <jimmy.yang@oracle.com>
    Reviewed-by: Xing Zhang <xing.z.zhang@oracle.com>
    RB: 11210

commit 53ed26cca73cddf1bedb5fff2ab45aa7428af3e0
Author: Arun Kuruvila <arun.kuruvila@oracle.com>
Date:   Wed Dec 2 11:37:18 2015 +0530

    Bug #17883203 : MYSQL EMBEDDED MYSQL_STMT_EXECUTE RETURN
                    "MALFORMED COMMUNICATION PACKET" ERROR
    
    Description :- C API, "mysql_stmt_execute" fails with an
    error, "malformed communication packet", even for a simple
    query when prepared statements are used with libmysqld.
    
    Analysis :- The packet size specified in
    "emb_stmt_execute()" [libmysqld/lib_sql.cc] and "execute()
    [libmysql/libmysql.c] should be consistent across libraries
    (libmysqld/libmysql) because "mysql_stmt_execute()"
    [sql/sql_prepare.cc ] is being called from both functions
    depending upon the libaries (libmysqld/libmysql) used.
    Currently the packet size used in "emb_stmt_execute() is 5
    and in "execute()" is 9. When the C API,
    "mysql_stmt_execute", is executed from an application which
    is linked with libmysqld, it fails in the function
    "mysql_stmt_execute()" because of incorrect packet size.
    Another bug also exists in the "Protocol::net_store_data()"
    [libmysqld/lib_sql.cc] due to dereferencing an undefined
    "next_field" pointer which results in a segmentation fault.
    
    Fix:-
    (a)The packet size is made consistent across libmysqld
    and libmysql.
    (b) For the problem found internally:
    Functions "prepare_for_resend(), "net_store_data()" (with
    and without charset conversion) are defined seperately for
    Protocol_binary class in case of embedded library.

commit 6b5f401dcbb1ab38ad9224b7df8aa17283a55ba7
Author: Shaohua Wang <shaohua.wang@oracle.com>
Date:   Wed Dec 2 10:16:40 2015 +0800

    BUG#21922532 SERVER CRASH WITH FTS QUERY IN HIGH CONCURRENCY
    
    The root cause is that we access the memory 'in_fts_query' of a MYSQL TABLE
    after it is closed.
    
    The solution is removing the 'in_fts_query' reset in innobase_fts_close_ranking(),
    because we have already reset the flag when closing the TABLE before cleanup.
    
    Reviewed-by: Jimmy Yang <jimmy.yang@oracle.com>
    RB: 11183

commit 31480c0cc6854d95b918bf9335bee58221d18820
Merge: 06ba430 5705fd9
Author: Venkatesh Duggirala <venkatesh.duggirala@oracle.com>
Date:   Tue Dec 1 15:38:37 2015 +0530

    Merge branch 'mysql-5.5' into mysql-5.6

commit 5705fd9b642db8cf58d9105314905c191a3cab9f
Author: Venkatesh Duggirala <venkatesh.duggirala@oracle.com>
Date:   Tue Dec 1 15:38:11 2015 +0530

    Bug#21205695	DROP TABLE MAY CAUSE SLAVES TO BREAK
        Problem:
        ========
        1) Drop table queries are re-generated by the server
        before writing the events (queries) into the binlog
        for various reasons. If the table name/db name contains
        non-regular characters (like Latin characters),
        the generated query is wrong. Hence it breaks
        replication.
        2) In the edge case when the table name/db name contains
        64 characters, the server throws an assert
        assert(M_TBLLEN < 128)
        3) In the edge case when the db name contains 64 Latin
        characters, the binlog content is interpreted badly,
        which leads to replication failure.
    
        Analysis & Fix :
        ================
        1) Parser reads the table name from the query and converts
        it to standard charset(utf8) and stores it in table_name variable.
        When drop table query is regenerated with the same table_name
        variable, it should be converted back to the original charset
        from standard charset(utf8).
    
        2) A Latin character takes two bytes. The limit
        of the identifier is 64. SYSTEM_CHARSET_MBMAXLEN is set to '3'.
        So there is a possibility that a tablename/dbname takes 3 * 64 bytes.
        Hence the assert is changed to
        (M_TBLLEN <= NAME_CHAR_LEN*SYSTEM_CHARSET_MBMAXLEN)

        3) db_len in the binlog event header takes 1 byte.
           db_len ranges from 0 to 192 bytes (3 * 64).
           While reading the db_len from the event, the server
           was casting to uint instead of uchar, which leads
           to a bad db_len. This problem is fixed by changing the
           cast type to uchar.
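
        A minimal sketch of the narrow-field decoding issue (toy buffer layout,
        not the real binlog event format): a one-byte length such as db_len must
        be read as a single unsigned char, since reading it through a wider
        integer type pulls in neighbouring bytes and yields a garbage length.

            #include <cstdint>
            #include <cstdio>
            #include <cstring>

            int main() {
                // Toy header fragment: one length byte followed by other data.
                unsigned char buf[5] = {192, 0xAA, 0xBB, 0xCC, 0xDD};

                // Wrong: reading a wider integer out of the same position mixes
                // the length byte with its neighbours.
                std::uint32_t bad_len;
                std::memcpy(&bad_len, buf, sizeof(bad_len));

                // Right: read exactly one byte as an unsigned char.
                std::uint32_t good_len = buf[0];

                std::printf("bad=%u good=%u\n",
                            static_cast<unsigned>(bad_len),
                            static_cast<unsigned>(good_len));  // good=192 (3 * 64)
                return 0;
            }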

commit 06ba43030b42962457a7ad53d345b30d289fe1f2
Author: V S Murthy Sidagam <venkata.sidagam@oracle.com>
Date:   Mon Nov 30 16:47:08 2015 +0530

    Bug #22216715 IN-CONSISTENT ERROR MESSAGES NOTICED WITH SSL AND WITHOUT SSL UNDER UCS2 CHARSET
    
    Description: Inconsistent error messages are noticed when we use
    ssl and non-ssl connections with unsupported client
    charsets(like "utf32","utf16","ucs2")
    
    Analysis: When the client sends the initial packet for the
    connection setup, it will send the client charset number
    along with the protocol packet. The server receives and
    parses it through parse_client_handshake_packet(). There,
    before establishing the SSL connection, it tries to
    validate the client charset, notices that the client
    charset is not supported and returns the error. But since
    the connection request was sent by the SSL client, the client
    receives the charset error message, parses it in its
    own way and reports the error as
    "protocol version mismatch[2026]".
    
    Fix: Moved the client charset validation function
    init_client_charset() to after the SSL connection handshake.

commit 37cc25391236054b08d1d92737736c5d9b26cdc1
Author: Joao Gramacho <joao.gramacho@oracle.com>
Date:   Sat Nov 28 12:36:45 2015 +0000

    BUG#22245619 SERVER ABORT AFTER SYNC STAGE OF THE COMMIT FAILS
    
    This is a post-push fix.
    
    Reverted the previous changes and made the binary log sync use the
    MY_IGNORE_BADFD flag when calling mysql_file_sync(). This flag makes
    the sync procedure ignore bad file descriptor errors, so the issue
    will not be considered an error anymore.

commit ce7bf820dfccc8dda3ccd74ab544d9731734c9d9
Author: Sujatha Sivakumar <sujatha.sivakumar@oracle.com>
Date:   Fri Nov 27 18:01:33 2015 +0530

    Bug#21317739: APPLYING CREATE TEMPORARY TABLE SQL ON A
    SLAVE WITH REPLICATE-REWRITE-DB FAILS
    
    Fixing a post push test issue.

commit a535f68440d9b980fb30164a78bc17de8db5ea23
Author: Sujatha Sivakumar <sujatha.sivakumar@oracle.com>
Date:   Thu Nov 26 18:40:34 2015 +0530

    Bug#21317739: APPLYING CREATE TEMPORARY TABLE SQL ON A SLAVE
    WITH REPLICATE-REWRITE-DB FAILS
    
    Problem:
    =======
    As part of the fix for BUG#16290902, at the time of writing the
    'DROP TEMPORARY TABLE IF EXISTS' query into the binlog, the
    query will not be preceded by a 'use `db`' statement. The
    query will have a fully qualified table name.
    
    Eg:
    'USE `db`; DROP TEMPORARY TABLE IF EXISTS `t1`;'
    will be logged as
    'DROP TEMPORARY TABLE IF EXISTS `db`.`t1`;'.
    
    Because of this change application of 'replicate-rewrite-db'
    filter rule will fail on slave, as it works only on default
    database specified in 'use' statement. This causes slave to
    break when the 'CREATE TEMPORARY TABLE' is re-executed on
    slave.
    
    Analysis:
    ========
    The intention of the BUG#16290902 fix was to address a specific
    scenario where the default database does not exist on the
    slave, but in spite of that the DROP TEMPORARY TABLE IF EXISTS
    query would be binlogged with a 'USE default_db' statement,
    which causes point-in-time recovery to fail when the user uses
    the slave's binary log to re-apply. But the fix was more
    generic. It completely removed 'USE default_db' for DROP
    TEMPORARY TABLE IF EXISTS queries even when their
    default databases are present. Hence the scope of the fix should
    have been narrowed.
    
    
    Fix:
    ===
    At the time of writing the 'DROP TEMPORARY TABLE IF EXISTS' query
    into the binary log, check if the default database exists.
    If it exists then write 'USE default_db' in the binary log.
    If the default database is not present then log the query with a
    qualified table name.

commit 81bd49d30b63d0e28786a69374835c8b00ea7c6e
Author: V S Murthy Sidagam <venkata.sidagam@oracle.com>
Date:   Thu Nov 26 21:49:00 2015 +0530

    Bug #19688135 ASYMMETRIC_ENCRYPT: UNINITIALIZED VALUE WHEN CHECKING FOR "PRIVATE KEY"
    Description:
    The valgrind warning "Conditional jump or move depends on uninitialised value(s)" is noticed when
    an encrypt function is executed along with a concat() function having a null string (i.e. concat('',a)).
    
    Analysis:
    In Item_func_concat::val_str(), when we have "res2->length() == 0" we just "continue",
    and the 'Ptr' string does not have the null terminating character. Hence valgrind reports the
    warning (like conditional jump ....) when the 'Ptr' string is used in the function asymmetric_encrypt()
    through the strstr() api. Whereas in the case of res2->length() > 0 we call res->append(*res2) and
    we append '\0' to the 'Ptr' string. So, valgrind is happy that the string is properly null terminated.
    
    We also noticed that get_algo() and get_digest_mode() have similar issues.
    
    Fix:
    Used my_memmem(), which takes the sizes of the string arguments, instead of strstr().
    And to fix the get_algo() and get_digest_mode() issues, we null terminated the input string, because my_memmem works only for case-sensitive strings.
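
    A brief standalone sketch of why a length-bounded search matters here:
    strstr() assumes NUL-terminated input, while std::search (used below as a
    stand-in for my_memmem()) works on explicit byte ranges and never reads
    past the given lengths. The buffer contents are made up for the example.

        #include <algorithm>
        #include <cstdio>
        #include <cstring>

        // Length-bounded substring search over raw bytes.
        static const char* find_bytes(const char* hay, std::size_t hay_len,
                                      const char* needle, std::size_t needle_len) {
            const char* end = hay + hay_len;
            const char* it = std::search(hay, end, needle, needle + needle_len);
            return it == end ? nullptr : it;
        }

        int main() {
            // A buffer that is NOT NUL-terminated within its logical length.
            char buf[11];
            std::memcpy(buf, "PRIVATE KEY", 11);

            // strstr(buf, "KEY") would keep scanning past buf[10] for a '\0'.
            const char* hit = find_bytes(buf, sizeof(buf), "KEY", 3);
            std::printf("found at offset %d\n", hit ? static_cast<int>(hit - buf) : -1);
            return 0;
        }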

commit 324a19b85684d0bbb92545b48e666b9caa05cd01
Author: Venkatesh Duggirala <venkatesh.duggirala@oracle.com>
Date:   Thu Nov 26 20:11:38 2015 +0530

    BUG#19286708 REPLICATION BROKEN AFTER CREATION OF SCHEDULED EVENTS
    
    Problem: In mixed replication, events that have unsafe functions
    (sysdate()) cannot be created.
    
    Analysis: DDL Statements (that change metadata) are always
    getting written in the binlog in STATEMENT format irrespective
    of binlog_format settings. And the changes to the metadata
    should not be replicated as the same statement when it is executed
    on Slave will update the metadata on slave side. Event based SQL
    commands (CREATE EVENT, ALTER EVENT and DROP EVENT) belong to
    the same category. Code was written to take care of changing the
    current_statement_binlog_format into 'statement' if it is 'row'
    in case of executing/replicating such statements. But in case of
    create event and alter event, after we are converting row format
    to statement format, while we are opening the tables server decides
    the binlogging format (in decide_binlog_format()) and the logic
    again changes binlog format to "row" if it sees the statement is
    unsafe and we are using mixed format.
    
    Fix: Reset the thd.variables.binlog_format to 'statement' before the create/alter
    event work, when we are clearing current_statement_binlog_format,
    and set it back to the original value at the end of the work.

commit f1b8a70aa6ed2add67952e48ae7043ed535c8e7a
Author: Jon Olav Hauglid <jon.hauglid@oracle.com>
Date:   Thu Nov 26 13:52:51 2015 +0100

    Bug#22194831 INSTALL-SOURCE AND INSTALL-WIN-SOURCE CONTAIN OUTDATED INFORMATION
    
    Post-push fix: Remove references to INSTALL-BINARY

commit bb6430fe864191b8530208e0c319bb9918d86351
Author: Sujatha Sivakumar <sujatha.sivakumar@oracle.com>
Date:   Thu Nov 26 11:17:31 2015 +0530

    Bug#21276561:FAILURE TO GENERATE GTID LEADS TO INCONSISTENCY
    
    Fixing a post push test issue.
    
    POST PUSH TEST ISSUE:
    ====================
    
    binlog_gtid_exhausted.test has failed in weekly-trunk with
    the following symptoms.
    
    -ERROR HY000: Binary logging not possible. Message: An error
    occurred during flush stage of the commit.
    'binlog_error_action' is set to 'ABORT_SERVER'. Hence
    aborting the server.
    +ERROR HY000: Binary logging not possible. Message: Hence
    aborting the server.
    
    Analysis:
    ========
    
    Existing code uses 'errmsg' char array as both source and
    destination within a 'sprintf' statement.
    
    As per the documentation, if copying takes place between
    objects that overlap as a result of a call to sprintf() or
    snprintf(), the results are undefined.
    
    This might be the reason for the above failure.
    
    Hence added a new destination buffer to hold the resulting
    error string.
    
    In addition to that, added a new 'master.opt' file with
    '--skip-core-file'.
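
    A short illustration of the undefined-behaviour pattern described above
    and of the straightforward fix (a separate destination buffer); the
    buffer names are invented for the example.

        #include <cstdio>

        int main() {
            char errmsg[128] = "An error occurred during flush stage of the commit.";

            // Undefined behaviour: source and destination overlap, so the
            // result may come out truncated or garbled, as in the failure above.
            // std::snprintf(errmsg, sizeof(errmsg),
            //               "Binary logging not possible. Message: %s", errmsg);

            // Fix: format into a distinct buffer and use that buffer instead.
            char full_msg[256];
            std::snprintf(full_msg, sizeof(full_msg),
                          "Binary logging not possible. Message: %s", errmsg);
            std::puts(full_msg);
            return 0;
        }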

commit 37c9489e50f04694b603dcb53e67861067b08bb7
Author: Erlend Dahl <erlend.dahl@oracle.com>
Date:   Tue Nov 24 15:13:27 2015 +0100

    Bug#22194831 INSTALL-SOURCE AND INSTALL-WIN-SOURCE CONTAIN OUTDATED INFORMATION
    
    Follow-up patch: remove Docs/INSTALL-BINARY, the information has been moved to INSTALL.

commit 7a58b6ac836c4c983a4fab60cc55618c43d70d65
Author: Joao Gramacho <joao.gramacho@oracle.com>
Date:   Wed Nov 25 10:16:39 2015 +0000

    BUG#22245619 SERVER ABORT AFTER SYNC STAGE OF THE COMMIT FAILS
    
    Problem:
    
    The binary log group commit sync is failing when committing a group of
    transactions into a non-transactional storage engine while other thread
    is rotating the binary log.
    
    Analysis:
    
    The binary log group commit procedure (ordered_commit) acquires LOCK_log
    during the #1 stage (flush). As it holds the LOCK_log, a binary log
    rotation will have to wait until this flush stage to finish before
    actually rotating the binary log.
    
    For the #2 stage (sync), the binary log group commit only holds the
    LOCK_log if sync_binlog=1. In this case, the rotation has to wait also
    for the sync stage to finish.
    
    When sync_binlog>1, the sync stage releases the LOCK_log (to let other
    groups enter the flush stage), holding only the LOCK_sync. In this
    case, the rotation can acquire the LOCK_log in parallel with the sync
    stage.
    
    For commits into a transactional storage engine, the binary log rotation
    checks a counter of "flushed but not yet committed" transactions,
    waiting for this counter to reach zero before closing the current
    binary log file.  As the commit of the transactions happens in the #3
    stage of the binary log group commit, the sync of the binary log in
    stage #2 always succeeds.
    
    For commits into a non-transactional storage engine, the binary log
    rotation checks the "flushed but not yet committed" transactions
    counter, but it is zero because it only counts transactions that
    contain XIDs. So, the rotation is allowed to take place in parallel
    with the #2 stage of the binary log group commit. When the sync is
    called at the same time that the rotation has closed the old binary log
    file but didn't open the new file yet, the sync is failing with the
    following error: 'Can't sync file 'UNOPENED' to disk (Errcode: 9 - Bad
    file descriptor)'.
    
    Fix:
    
    For non-transactional only workload, binary log group commit will keep
    the LOCK_log when entering #2 stage (sync) if the current group is
    supposed to be synced to the binary log file.

commit 4cde35096e9b214e0b858d68bd7a4793dfb7506b
Author: Ajo Robert <ajo.robert@oracle.com>
Date:   Mon Nov 23 21:43:02 2015 +0530

    Bug #20201006 spamming show processlist prevents old
    connection threads from cleaning up.
    
    Analysis
    =========
    The issue here is that a delay in connection cleanup, for which the global
    connection counter is already decremented, makes room for
    new connections. Hence more connections than expected are
    observed in the server.
    
    Connections running the statement "SHOW PROCESSLIST" or "SELECT
    on INFORMATION_SCHEMA.PROCESSLIST" acquire the mutex
    LOCK_thd_remove for reading information about all the connections
    in the server. Connections in the cleanup phase acquire the mutex to
    remove the thread from the global thread list. Many such concurrent
    connections increase contention on the mutex LOCK_thd_remove.
    
    In the connection cleanup phase, the connection count is decreased
    first and then the thd is removed from the global thd list. This
    order makes a new connection (above max_connections) possible
    while removal of existing connections is still pending, because
    the mutex LOCK_thd_remove is held by show processlist.
    
    Fix:
    =====
    In the connection cleanup phase, the thd is removed from the global
    thd list first and then the global connection count is
    decremented. Added code to wait for connection_count to
    be zero during shutdown, to ensure connection threads are
    done with their task.
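
    A toy sketch of the reordering described in the fix (illustrative data
    structures, not the server's actual THD bookkeeping): remove the
    connection from the global list first, and only then decrement the
    counter that admits new connections.

        #include <algorithm>
        #include <atomic>
        #include <mutex>
        #include <vector>

        std::mutex       LOCK_thd_remove;      // guards the global list
        std::vector<int> global_thread_list;   // toy stand-in for the THD list
        std::atomic<int> connection_count{0};  // gates new connections

        void cleanup_connection(int thd_id) {
            {   // Remove the connection from the global list first...
                std::lock_guard<std::mutex> guard(LOCK_thd_remove);
                global_thread_list.erase(
                    std::remove(global_thread_list.begin(),
                                global_thread_list.end(), thd_id),
                    global_thread_list.end());
            }
            // ...and only then free up a slot, so a new connection is never
            // admitted while the old one is still visible in the list.
            --connection_count;
        }

        int main() {
            global_thread_list.push_back(42);
            ++connection_count;
            cleanup_connection(42);
            return 0;
        }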

commit 73c9b7335eaa873cdc0ec195fc1e9297f09c745b
Author: Sujatha Sivakumar <sujatha.sivakumar@oracle.com>
Date:   Mon Nov 23 09:32:50 2015 +0530

    Bug#21486161: FLUSH_CACHE_TO_FILE CALLS _EXIT WHEN
    ABORT_SERVER.
    
    Added '--skip-core-file' option to
    binlog_error_action-master.opt file to avoid generation of
    core files.

commit ca169a7e2e549dc17873004f7a9121b2aa5684df
Merge: 3d7342c 851d144
Author: Venkatesh Duggirala <venkatesh.duggirala@oracle.com>
Date:   Sat Nov 21 11:13:36 2015 +0530

    Merge branch 'mysql-5.5' into mysql-5.6

commit 851d1440e9c4120556d091cadcd49c7fc5b15906
Author: Venkatesh Duggirala <venkatesh.duggirala@oracle.com>
Date:   Sat Nov 21 11:08:44 2015 +0530

    Bug #17047208	REPLICATION DIFFERENCE FOR MULTIPLE TRIGGERS
    
    Fixing a pb2 valgrind failure.
    Missed an 'if condition' check while moving the logic
    from one place to another.

commit 3d7342c6705bd4bf7f4e798e728ec4bfecbd42a8
Author: Thirunarayanan Balathandayuthapani <thirunarayanan.balathandayuth@oracle.com>
Date:   Fri Nov 20 15:31:04 2015 +0530

    Bug #19183565 CREATE DYNAMIC INNODB_TMPDIR VARIABLE TO CONTROL
    		WHERE INNODB WRITES TEMP FILES
    
    	- Post push fix to address formatting issue.

commit 6cd9b7554921fbc125387108420ade658a82f99a
Author: Thirunarayanan Balathandayuthapani <thirunarayanan.balathandayuth@oracle.com>
Date:   Fri Nov 20 15:05:12 2015 +0530

    Bug #19183565 CREATE DYNAMIC INNODB_TMPDIR VARIABLE TO CONTROL
    		WHERE INNODB WRITES TEMP FILES
    
    	- Post push fix to address pb2 failure.

commit 863c81e92ea021f28fffbae436928d059c6786fe
Merge: d6dbbfe 2590c93
Author: Chaithra Gopalareddy <chaithra.gopalareddy@oracle.com>
Date:   Fri Nov 20 12:31:43 2015 +0530

    Merge branch 'mysql-5.5' into mysql-5.6

commit 2590c93eb604ba07867d38718f21cb3f09f0c75d
Author: Chaithra Gopalareddy <chaithra.gopalareddy@oracle.com>
Date:   Fri Nov 20 12:30:15 2015 +0530

    Bug#19941403: FATAL_SIGNAL(SIG 6) IN BUILD_EQUAL_ITEMS_FOR_COND | IN SQL/SQL_OPTIMIZER.CC:1657
    
    Problem:
    At the end of the first execution, select_lex->prep_where is pointing to
    a runtime-created object (a temporary table field). As a result the
    server exits trying to access an invalid pointer during the second
    execution.
    
    Analysis:
    While optimizing the join conditions for the query, after the
    permanent transformation, the optimizer makes a copy of the new
    where conditions in select_lex->prep_where. "prep_where" is what
    is used as the "where condition" for the query at the start of execution.
    W.r.t. the query in question, the "where" condition is actually pointing
    to a field in the temporary table. As a result, for the second
    execution the pointer is no longer valid, resulting in server exit.
    
    Fix:
    At the end of the first execution, select_lex->where will have the
    original item of the where condition.
    Make prep_where the new place where the original item of select->where
    has to be rolled back.
    Fixed in 5.7 with the wl#7082 - Move permanent transformations from
    JOIN::optimize to JOIN::prepare
    
    Patch for 5.5 includes the following backports from 5.6:
    
    Bugfix for Bug12603141 - This makes the first execute statement in the testcase
    pass in 5.5
    
    However it was noted later in Bug16163596 that the above bugfix needed to
    be modified. Although Bug16163596 is reproducible only with the changes done for
    Bug12582849, we have decided to include the fix.
    
    Considering that Bug12582849 is related to Bug12603141, that fix is
    also included here. However this results in Bug16317817, Bug16317685 and
    Bug16739050. So the fix for the above three bugs is also part of this patch.

commit d6dbbfe615cb3dc2fcc951bf004a6d999d0bd121
Author: Venkatesh Duggirala <venkatesh.duggirala@oracle.com>
Date:   Fri Nov 20 12:09:52 2015 +0530

    BUG#21253415 MULTIPLE DROP TEMP TABLE STATEMENTS IN SF CAUSE REPLICATION
     FAILS USING 5.6 GTID
     Problem: When there is more than one DROP TEMP TABLE in a stored function
              or trigger, replication fails when GTIDs are enabled.
    
     Analysis: In ROW based replication format, even though CREATE TEMPORARY
               TABLE query is not replicated, DROP TEMPORARY TABLE queries
               are replicated to achieve proper clean up on Slave end (CREATE
               TEMPORARY TABLE query would have executed and replicated when
               the replication format is STATEMENT) by adding 'IF EXISTS'
               clause. When DROP TEMPORARY TABLE query is in a stored function
               along with some DML statements, the binlog equivalent query
               for that function execution will look like
               BEGIN
                 DROP TEMP TABLE ...
                 ROW EVENT FOR DML 1
                 ROW EVENT FOR DML 2
               END
               But when GTIDs are enabled, it is documented that CREATE/DROP
               TEMPORARY TABLE queries are not allowed in Multi Statement
               Transactions because half executed gtid transactions (rolled
               back of these transactions) can leave these temporary tables
               in a bad state.
               In the old code, one DROP TEMPORARY TABLE in a function is
               working fine because the 'DROP TEMP TABLE' goes into the
               STMT_CACHE (which is not wrapped with BEGIN/COMMIT).
               //STMT_CACHE
               GTID_EVENT
               DROP TEMP TABLE ...
               //TRANS_CACHE
               GTID_EVENT
               BEGIN
                 ROW EVENT FOR DML 1
                 ROW EVENT FOR DML 2
               END
    
               But if the function contains two 'DROP TEMP TABLE's, both
               of them go into the 'STMT_CACHE' (which is not wrapped
               with BEGIN/COMMIT), and a STMT_CACHE with one gtid_event cannot
               accommodate two separate DROP TEMP TABLE queries. And with the above
               Multi Statement Transactions + GTID restriction, we cannot
               add 'BEGIN/COMMIT'.
    
    Fix:  Stored functions and Triggers are also considered as another form of
          Multi Statement Transactions across the server. To maintain gtid
          consistency and to avoid the problems that are mentioned in this bug
          scenario,  CREATE/DROP temp tables are disallowed from stored functions
          and triggers also just like how they were restricted in Multi Statement
          Transactions. Now function execution that has CREATE/DROP TEMP TABLES
          will throw ER_GTID_UNSAFE_CREATE_DROP_TEMPORARY_TABLE_IN_TRANSACTION.
          ("When @@GLOBAL.ENFORCE_GTID_CONSISTENCY = 1, the statements CREATE
            TEMPORARY TABLE and DROP TEMPORARY TABLE can be executed in a
            non-transactional context only, and require that AUTOCOMMIT = 1. These
            statements are not allowed in Functions or Triggers also as they are also
            considered as Multi Statement transaction.)

commit dd2abff64a37627add89dd66269139bc416c58b7
Merge: c2e32ca 88f397d
Author: Sreeharsha Ramanavarapu <sreeharsha.ramanavarapu@oracle.com>
Date:   Fri Nov 20 05:41:32 2015 +0530

    Merge branch 'mysql-5.5' into mysql-5.6

commit 88f397d1715c8b7e8224916060fb43704e233349
Author: Sreeharsha Ramanavarapu <sreeharsha.ramanavarapu@oracle.com>
Date:   Fri Nov 20 05:40:39 2015 +0530

    Bug #22214867: MYSQL 5.5: MAIN.SUBSELECT AND OTHERS FAIL
                   WITH NEW VALGRIND
    
    Issue:
    ------
    The function signature in valgrind.supp requires a change with
    valgrind 3.11. The static function name is replaced with a wildcard.

commit c2e32cab82817cc3367235915e6207e9d5c773f8
Author: Erlend Dahl <erlend.dahl@oracle.com>
Date:   Thu Nov 19 16:32:43 2015 +0100

    Revert "Bug #19688135 ASYMMETRIC_ENCRYPT: UNINITIALIZED VALUE WHEN CHECKING FOR "PRIVATE KEY""
    
    This reverts commit 6f92f1c1e57a0ec41a6a12631eef23deecbba843.
    This reverts commit 7f9c073150e49e4626637bdbf9d540dd073403db.

commit 6f92f1c1e57a0ec41a6a12631eef23deecbba843
Author: V S Murthy Sidagam <venkata.sidagam@oracle.com>
Date:   Thu Nov 19 19:28:05 2015 +0530

    Bug #19688135 ASYMMETRIC_ENCRYPT: UNINITIALIZED VALUE WHEN CHECKING FOR "PRIVATE KEY"
    
    Post push changes.
    Missed the closing braces.

commit f65528633f4178b82f70c212714f78a2e9302a92
Author: V S Murthy Sidagam <venkata.sidagam@oracle.com>
Date:   Thu Nov 19 16:53:20 2015 +0530

    Bug #17618162 SSL CONNECTION NOT CONSIDERING THE VALUE SPECIFIED BY MYSQL_OPT_READ_TIMEOUT
    
    Description: An SSL connection does not consider the value
    specified by MYSQL_OPT_READ_TIMEOUT. Connect the client
    to the server with MYSQL_OPT_READ_TIMEOUT smaller
    (e.g. 5 seconds) than the mysql_query() execution
    time (e.g. mysql_query() runs "SELECT SLEEP(10)").
    For the above scenario you get a "Lost connection to MySQL
    server during query" error. This is what happens in the non-SSL
    case, but in the SSL connection case the query
    execution is successful.
    
    Analysis: When we set MYSQL_OPT_READ_TIMEOUT to '5'
    under an SSL connection, we call my_net_set_read_timeout()
    in mysql_real_connect() (i.e. CLI_MYSQL_REAL_CONNECT), which
    converts vio->read_timeout to milliseconds, and further
    we call run_plugin_auth() and so on down to ssl_do(),
    where we make a new vio and copy the timeout values to
    it. But here we already have the timeouts in milliseconds,
    and we pass those milliseconds (as seconds) to
    vio_timeout(). Thus they are further converted to
    incorrect milliseconds, which leads to having 15000000
    milliseconds instead of 15000 milliseconds in the case of SSL
    connections. Hence the read_timeout for the query is larger
    and we get the results successfully.
    
    Fix: Since the updated values of vio->read_timeout and
    vio->write_timeout are always in milliseconds, we
    convert the milliseconds value of vio->read_timeout
    in vio_reset to seconds before calling vio_timeout().
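
    A tiny standalone illustration of the unit mix-up described above: a
    timeout already stored in milliseconds has to be converted back to
    seconds before being handed to an API that multiplies by 1000 again
    (the function below is invented for the example; it is not vio_timeout()).

        #include <cstdio>

        // Stand-in for an API that expects SECONDS and converts internally.
        static long set_timeout_seconds(long seconds) { return seconds * 1000; }

        int main() {
            long read_timeout_ms = 5 * 1000;  // option given as 5 seconds

            // Bug pattern: passing milliseconds where seconds are expected
            // inflates the effective timeout by a factor of 1000.
            long wrong = set_timeout_seconds(read_timeout_ms);         // 5000000 ms

            // Fix pattern: convert back to seconds before calling the API.
            long right = set_timeout_seconds(read_timeout_ms / 1000);  // 5000 ms

            std::printf("wrong=%ld ms, right=%ld ms\n", wrong, right);
            return 0;
        }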

commit 7f9c073150e49e4626637bdbf9d540dd073403db
Author: V S Murthy Sidagam <venkata.sidagam@oracle.com>
Date:   Thu Nov 19 16:31:16 2015 +0530

    Bug #19688135 ASYMMETRIC_ENCRYPT: UNINITIALIZED VALUE WHEN CHECKING FOR "PRIVATE KEY"
    
    Description:
    The valgrind warning "Conditional jump or move depends on uninitialised value(s)" is noticed when
    an encrypt function is executed along with a concat() function having a null string (i.e. concat('',a)).
    
    Analysis:
    In Item_func_concat::val_str(), when we have "res2->length() == 0" we just "continue",
    and the 'Ptr' string does not have the null terminating character. Hence valgrind reports the
    warning (like conditional jump ....) when the 'Ptr' string is used in the function asymmetric_encrypt()
    through the strstr() api. Whereas in the case of res2->length() > 0 we call res->append(*res2) and
    we append '\0' to the 'Ptr' string. So, valgrind is happy that the string is properly null terminated.
    
    We also noticed that get_algo() and get_digest_mode() have similar issues.
    
    Fix:
    Used my_memmem(), which takes the sizes of the string arguments, instead of strstr().
    And to fix the get_algo() and get_digest_mode() issues, we null terminated the input string, because my_memmem works only for case-sensitive strings.

commit 55ee9a89bb842fa0278b093780a425e605c52e3d
Author: Sergey Glukhov <sergey.glukhov@oracle.com>
Date:   Thu Nov 19 13:37:31 2015 +0300

    Bug#22173419 VALGRIND FAILURE IN MAIN.GROUP_MIN_MAX
    
    The query below produces a valgrind warning:
    
    select a1,a2,b,min(c) from t2 where
    ((a1 > 'a') or (a1 < '9')) and (a2 >= 'b') and (b = 'a') and (c < 'h112')
    group by a1,a2,b;
    
    The problematic expression is (c < 'h112').
    SEL_ARG::min_range is is_null_string, which has length=2; memcmp
    is executed with a length that is bigger than is_null_string.
    
    The fix: Do not perform the comparison if one of the arguments is the NULL value.
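
    A compact illustration of the out-of-bounds comparison pattern (toy
    buffers, not the real SEL_ARG structures): memcmp must not be called
    with a length larger than the shorter operand, so the comparison is
    skipped entirely when one side is the NULL marker.

        #include <cstdio>
        #include <cstring>

        int main() {
            const char is_null_string[2] = {1, 0};  // 2-byte NULL marker
            const char key_value[]       = "h112";  // longer key from the range

            std::size_t cmp_len = sizeof(key_value) - 1;  // 4 > sizeof(is_null_string)

            // Guard: skip the comparison when one argument is the NULL marker,
            // instead of reading past the end of the shorter buffer.
            bool lhs_is_null = (is_null_string[0] == 1);
            int  cmp = 0;
            if (!lhs_is_null)
                cmp = std::memcmp(is_null_string, key_value, cmp_len);

            std::printf("skipped=%d cmp=%d\n", lhs_is_null ? 1 : 0, cmp);
            return 0;
        }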

commit c57bd769995cb109764bd6dce0466dad838c7403
Author: Thirunarayanan Balathandayuthapani <thirunarayanan.balathandayuth@oracle.com>
Date:   Thu Nov 19 15:19:23 2015 +0530

    Bug #19183565 CREATE DYNAMIC INNODB_TMPDIR VARIABLE TO CONTROL
    		WHERE INNODB WRITES TEMP FILES
    
    Problem:
    ========
    InnoDB creates temporary files for online ALTER statements in the tmpdir.
    In some cases, the tmpdir is too small, or for other reasons, not the best
    choice.
    
    Solution:
    =========
    Create a new dynamic session variable "innodb_tmpdir"
    that determines where the temp files should be created during an alter
    operation.
    
    Behaviour of innodb_tmpdir :
    ===========================
    1) Default value is NULL.
    2) Valid inputs are any path other than mysql data directory path.
    3) Directory permission and existence are checked as a part of validation for
       innodb_tmpdir.
    4) If the value is set to NULL, then the temporary file is created in the location
       of the mysql server variable (--tmpdir).
    5) The user should have the FILE privilege to set the variable.
    6) If the user provides a path which is a symlink, then we resolve it and store
       the absolute path in innodb_tmpdir.
    7) Path should not exceed 512 bytes.
    8) Path should be a directory.
    
    Reviewed-by: Marko Mäkelä<marko.makela@oracle.com>
    Reviewed-by: Harin Vadodaria<harin.vadodaria@oracle.com>
    Reviewed-by: Jon Olav Hauglid<jon.hauglid@oracle.com>
    RB: 7628

commit 8da4ada6ff2dfab3fdf3909bf9bb02c7f0f48631
Merge: 1c6cb97 b2dcd42
Author: Venkatesh Duggirala <venkatesh.duggirala@oracle.com>
Date:   Thu Nov 19 13:59:55 2015 +0530

    Merge branch 'mysql-5.5' into mysql-5.6

commit b2dcd4269e203ab88c744f5b00a50f5ce17ff7ae
Author: Venkatesh Duggirala <venkatesh.duggirala@oracle.com>
Date:   Thu Nov 19 13:59:27 2015 +0530

    Bug#17047208        REPLICATION DIFFERENCE FOR MULTIPLE TRIGGERS
    
        Problem & Analysis: If a DML statement invokes a trigger or a
        stored function that inserts into an AUTO_INCREMENT column,
        that DML has to be marked as an 'unsafe' statement. If the
        tables are locked in the transaction prior to the DML statement
        (using LOCK TABLES), then the same statement is not marked as
        an 'unsafe' statement. The logic that checks for unsafeness
        is protected with if (!thd->locked_tables_mode). Hence if
        we lock the tables prior to the DML statement, it does *not* enter
        this if condition. Hence the statement is not marked
        as an unsafe statement.
    
        Fix: Irrespective of the locked_tables_mode value, the unsafeness
        check should be done. Now with this patch, the code is moved
        out to the 'decide_logging_format()' function, where all these checks
        happen, and without the 'if(!thd->locked_tables_mode)' guard.
        Along with the specified test case in the bug scenario
        (BINLOG_STMT_UNSAFE_AUTOINC_COLUMNS), we also identified that
        other cases BINLOG_STMT_UNSAFE_AUTOINC_NOT_FIRST,
        BINLOG_STMT_UNSAFE_WRITE_AUTOINC_SELECT and BINLOG_STMT_UNSAFE_INSERT_TWO_KEYS
        were also protected with thd->locked_tables_mode, which is not right. All
        of those checks are also moved to the 'decide_logging_format()' function.

commit 1c6cb9701276f984a48fd8f8b0852710e7140672
Merge: 094c51e 8e2a9f3
Author: Sreeharsha Ramanavarapu <sreeharsha.ramanavarapu@oracle.com>
Date:   Wed Nov 18 08:04:51 2015 +0530

    Merge branch 'mysql-5.5' into mysql-5.6

commit 8e2a9f33f1c61b5b4b9eead22e64fd1d2b4ecaa5
Author: Sreeharsha Ramanavarapu <sreeharsha.ramanavarapu@oracle.com>
Date:   Wed Nov 18 08:04:04 2015 +0530

    Bug #22214852: MYSQL 5.5 AND 5.6: MAIN.KEY AND OTHER
                   FAILURE WITH VALGRIND FOR RELEASE BUILD
    
    Issue:
    ------
    Initialization of variable with UNINIT_VAR is flagged by
    valgrind 3.11.
    
    SOLUTION:
    ---------
    Initialize the variable to 0.
    
    This is a backport of Bug# 14580121.

commit 094c51ec137d830625025b83e7ae89b580b7fc1b
Author: Erlend Dahl <erlend.dahl@oracle.com>
Date:   Thu Nov 12 21:47:15 2015 +0100

    Bug#22194831 INSTALL-SOURCE AND INSTALL-WIN-SOURCE CONTAIN OUTDATED INFORMATION
    
    Removed two files that contained duplicate information:
    INSTALL-WIN-SOURCE and BUILD-CMAKE.
    
    Renamed INSTALL-SOURCE to INSTALL.
    
    Updated the information with correct links for different versions.
    
    Approved-by: Jon Hauglid <jon.hauglid@oracle.com>
    Approved-by: Terje Rosten <terje.rosten@oracle.com>

commit cd3a7d92b7fd8cda5aeea554f0ca91ba1b31e41a
Author: Venkatesh Duggirala <venkatesh.duggirala@oracle.com>
Date:   Tue Nov 17 14:50:49 2015 +0530

    Bug#21229951 VARIABLES IN ALTER EVENT NOT REPLICATED PROPERLY
    Problem: SP local variables that are used in ALTER EVENT
    are not replicated properly.
    
    Analysis: CALL statements are not written into the binary log. Instead,
    each statement executed in the SP is binlogged separately, with the exception
    that we modify the query string: we replace uses of SP local variables with
    NAME_CONST('spvar_name', <spvar-value>) calls. The code was written
    under the assumption that this replacement was not required at all
    for queries in 'row' based format. But it can happen that DDLs
    (which are always binlogged in statement mode irrespective of binlog
    format) also use SP local variables, and they suffer due to the
    above assumption. 'ALTER EVENT' in this bug is one such case.
    
    Fix: Any SP local variables used in a query should
    be replaced with NAME_CONST(...) except for the case when it is DML
    query and binlog format is 'ROW'.

commit 3d99c59d258d0f8622f26fe76956b718d838e329
Author: Mithun C Y <mithun.c.y@oracle.com>
Date:   Tue Nov 17 14:30:51 2015 +0530

    Bug #19479836: TEST: UP_MULTI_DB_TABLE, SUITE: ENGINES/FUNCS FAILS ON 5.6 PB2.
    
    Issue:
    ======
    A change in the join order in a multi-update statement can produce
    different results; the main reasons are bugs 16767011 and 18449085.
    But those fixes were only made in 5.7, and back-porting the same
    thing to 5.6 is a little risky, as the decision to use a temp
    table for the first table in the join order is made before conds
    are rearranged and pre-processed in make_join_select,
    whereas in 5.7+ such a decision happens after
    make_join_select. This particular change from 5.7 to 5.6
    appears risky, hence it is not back-ported.
    
    Solution:
    =========
    Re-writing the tests in 5.6 so they produce consistent
    plans (join orders). This solves the sporadic failures.

commit 020f3f8081c2ec7881f88b77760d65e0dd4c8b12
Author: Sujatha Sivakumar <sujatha.sivakumar@oracle.com>
Date:   Mon Nov 16 15:43:00 2015 +0530

    Bug#21276561: FAILURE TO GENERATE GTID LEADS TO
    INCONSISTENCY
    
    Problem:
    =======
    If generating a GTID for a transaction fails, the
    transaction is not written to the binary log but still gets
    committed, which potentially leads to master/slave data
    inconsistency.
    
    Analysis:
    ========
    Running out of GTID numbers is a very rare scenario, but if
    that happens that will be a fatal error and we advise users
    to 'Restart the server with a new server_uuid'.
    
    GTIDs are generated at the time of flushing the binlog cache
    to the actual binary log. In the existing code, if generation
    of a GTID fails, the client will get an appropriate error message
    and the action specified by the binlog_error_action
    setting will take place, as this is a flush stage error. I.e.
    if the user chose binlog_error_action to be 'IGNORE_ERROR',
    then the binlog will be disabled and transactions will continue
    to get committed on the master, and this will lead to an
    inconsistent slave. If the user chose binlog_error_action
    to be 'ABORT_SERVER', then the server will abort without
    committing the transaction on the master, so it will be consistent
    with the slave. This behavior is the most appropriate one. At
    present, while displaying the error message in the error log, it
    is displayed as a 'sync' stage error, but it should be a
    'flush' stage error.
    
    Fix:
    ===
    During flush stage if generation of GTID fails then we set
    thd->commit_error= THD::CE_FLUSH_ERROR, so that the error
    message is accurate.

commit 96e90076b9a76b631b4e849af982cc7ab6d1ef10
Author: Sujatha Sivakumar <sujatha.sivakumar@oracle.com>
Date:   Fri Nov 13 12:22:30 2015 +0530

    Bug#21630907: MISSING DATA DETECTED WHEN MAX_BINLOG_SIZE
    SMALLER ON SLAVE FOR 3 NODE TOPOLOGY
    
    Bug#21053163: MIXED BASED REPLICATION LOOSES EVENTS WHEN
    RELAY_LOG_INFO_REPOSITORY=TABLE
    
    Problem:
    =======
    In 2-level replication M1 -> S1 -> S2 (S1 is a slave of M1; S2 is
    a slave of S1) replicating a non-transactional storage engine
    table (e.g. MyISAM) with relay_log_info_repository=TABLE set,
    if binlog rotation occurs in the middle of a statement that
    was translated to multiple rows, then you lose part of those
    events.
    
    When binlog rotation occurs on S1, not all rows are written
    to its binlog, therefore S2 silently loses part of the rows
    that were translated from one statement to several rows.
    
    Fix:
    ===
    The above mentioned bugs got fixed as part of BUG#16418100
    fix. Adding additional test cases to improve test coverage.

commit 82e2842071513c6356f37a4d73577d33624a1f5e
Merge: 01a0558 22beefb
Author: Ajo Robert <ajo.robert@oracle.com>
Date:   Fri Nov 13 18:17:59 2015 +0530

    Merge branch 'mysql-5.5' into mysql-5.6

commit 22beefba1ee608eee7106e078eb7470f515b6c26
Author: Ajo Robert <ajo.robert@oracle.com>
Date:   Fri Nov 13 18:04:31 2015 +0530

    Bug#20691429 ASSERTION `CHILD_L' FAILED IN STORAGE/MYISAMMRG/
    HA_MYISAMMRG.CC:631
    
    Analysis
    ========
    Any attempt to open a temporary MyISAM merge table consisting
    of a view in its list of tables (not the last table in the list)
    under LOCK TABLES causes the server to exit.
    
    The current implementation doesn't perform sanity checks during
    merge table creation. This allows a merge table to be created
    with incompatible tables (tables with a non-MyISAM engine),
    views, or even with tables that don't exist in the system.
    
    During view open, the check to verify whether the requested view
    is part of a merge table is missing in the LOCK TABLES path
    in open_table(). This leads to opening the underlying table
    with parent_l having a NULL value. Later, when attaching child
    tables to the parent, this hits an ASSERT, as all child tables
    should have parent_l pointing to the merge parent. If the operation
    does not happen under LOCK TABLES mode, open_table() checks
    the view's parent_l and returns an error.
    
    Fix:
    ======
    A check is added before opening a view under LOCK TABLES in open_table()
    to verify whether it is part of a merge table. An error is returned
    if the view is part of a merge table.

commit 01a0558be8c6f67a44ce2e2aa5777826912f49df
Merge: 0d33177 3b2afdc
Author: Ajo Robert <ajo.robert@oracle.com>
Date:   Fri Nov 13 17:53:46 2015 +0530

    Merge branch 'mysql-5.5' into mysql-5.6

commit 3b2afdc52ec0e04119e133cf0959ea1f802711ea
Author: Ajo Robert <ajo.robert@oracle.com>
Date:   Fri Nov 13 17:51:18 2015 +0530

    Bug#19817021 CRASH IN TABLE_LIST::PREPARE_SECURITY WHEN
    DOING BAD DDL IN PREPARED STATEMENT
    
    Analysis
    ========
    A repeat execution of the prepared statement 'ALTER TABLE v1
    CHECK PARTITION' where v1 is a view leads to server exit.
    
    ALTER TABLE ... CHECK PARTITION is not applicable to views,
    and the check for this is missing. This leads to
    further execution and creation of a derived table for the view
    (allocated under the temp_table mem_root). Any reference to the opened
    view or related pointers from the second execution leads to
    server exit, as they were freed at the previous execution's closure.
    
    Fix:
    ======
    Added a check for views in mysql_admin_table() for the PARTITION
    operation. This prevents mysql_admin_table() from
    going ahead and creating a temp table, and the related issues.
    Changed the message for the admin table view operation error to
    be more appropriate.

commit 0d3317794f4321cdf270229bd802ad0aa736e1c3
Author: Mayank Prasad <mayank.prasad@oracle.com>
Date:   Fri Nov 13 11:29:55 2015 +0530

    Bug #21765843 : ASSERTION `PFS->M_PROCESSLIST_ID != 0' FAILED IN TABLE_SESSION_CONNECT.CC:254
    
    Before this fix:
    	BACKGROUND threads are not supposed to have connection attributes
    	and therefore an ASSERT was in place to make sure this code point
    	is not reached from a BACKGROUND thread.
    	But when server is started with --thread-handling=no-threads, no
    	FOREGROUND thread is created for client connection and this code
    	point is reached when a query to table session_connect_attrs is
    	made, and an ASSERT was seen.
    
    After this fix:
    	Instead of an ASSERT, silently return from the function.

commit 5485716b211ff55536a26c98ec654dd8ff6e2b55
Author: Lars Tangvald <lars.tangvald@oracle.com>
Date:   Thu Nov 12 11:15:30 2015 +0100

    Bug#22147191	NO PACKAGE FOR UBUNTU 15.10
    
    * Adds wily (15.10) packaging

commit 39a430e785a0bedaea8287f4a6cb809b18cb6f2b
Author: Mikhail Izioumtchenko <michael.izioumtchenko@oracle.com>
Date:   Mon Nov 9 22:04:41 2015 +0100

    remove n test, nowadays it has its own repository

commit 09eba96242103471dcb3e8512c0cbd478201b7f9
Merge: 94e193c c848ba1
Author: Bjorn Munch <bjorn.munch@oracle.com>
Date:   Mon Nov 9 15:28:27 2015 +0100

    Raise version number after cloning 5.6.28

commit c848ba1c1adad3792a57cb412e84c3828a652d67
Author: Bjorn Munch <bjorn.munch@oracle.com>
Date:   Mon Nov 9 15:25:01 2015 +0100

    Raise version number after cloning 5.5.47